coredns updates with AWS user stuff

ducoterra
2023-03-31 14:53:45 -04:00
parent 3c0b8ab008
commit 4afa2c621d
11 changed files with 755 additions and 18 deletions


@@ -18,12 +18,95 @@ sudo systemctl restart sshd
## Certbot for Cockpit
### AWS User
Create an AWS user that will have Route 53 access. This is required for certbot's route53
validation.
```bash
aws iam create-user --user-name replicator
```
You'll also need a policy that allows the user to modify the selected hosted zone
(list your zone IDs with `aws route53 list-hosted-zones`):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:ListHostedZones",
"route53:GetChange"
],
"Resource": [
"*"
]
},
{
"Effect" : "Allow",
"Action" : [
"route53:ChangeResourceRecordSets"
],
"Resource" : [
"arn:aws:route53:::hostedzone/Z012820733346FJ0U4FUF"
]
}
]
}
```
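Create the policy before attaching it. A minimal sketch, assuming the JSON above is saved as
`policy.json`; the policy name mirrors the ARN referenced in the next step:
```bash
aws iam create-policy \
    --policy-name certbot-route53-reeseapps \
    --policy-document file://policy.json
```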
Attach the policy to the user:
```bash
aws iam attach-user-policy \
--user-name replicator \
--policy-arn arn:aws:iam::892236928704:policy/certbot-route53-reeseapps
```
Generate an access key for the user (the output includes the `AccessKeyId` and `SecretAccessKey` used below):
```bash
aws iam create-access-key --user-name replicator
```
On the host machine:
```bash
mkdir ~/.aws
vim ~/.aws/config
```
```conf
[profile default]
region=us-east-2
```
```bash
vim ~/.aws/credentials
```
```conf
[default]
aws_access_key_id=
aws_secret_access_key=
```
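With the profile and credentials in place, certbot can complete the DNS-01 challenge through
Route 53. A minimal sketch, assuming the `certbot-dns-route53` plugin is installed and
`cockpit.example.com` stands in for the real hostname:
```bash
# certbot looks for AWS credentials in ~/.aws of the user running it (root under sudo)
sudo certbot certonly --dns-route53 -d cockpit.example.com
```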
### Initial Setup
1. Create a "containers" user in AWS. Copy the permissions from FreeNAS
2. Create credentials
3. Add your credentials to root
4. Install the aws cli v2
```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```
5. Test your credentials with `aws route53 list-hosted-zones`


@@ -21,7 +21,11 @@ curl -sfL https://get.k3s.io | sh -s - \
"--disable" \
"traefik" \
"--disable" \
"local-storage"
"local-storage" \
"--disable" \
"coredns" \
"--cluster-dns" \
"10.43.0.10"
```
Now you can change the ownership of (and copy) the k3s.yaml file:
@@ -41,6 +45,27 @@ We're using SQLite (because it's all we really need). The db is stored at
and restore it by copying it back. Note: you must also copy `/var/lib/rancher/k3s/server/token`
and use its contents as the token when restoring the backup, since the data is encrypted
with that token.
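A minimal backup sketch, assuming the default k3s SQLite location under
`/var/lib/rancher/k3s/server/db/` and a hypothetical `/backup` target (stop k3s first so the
database is quiescent):
```bash
sudo systemctl stop k3s
sudo cp -r /var/lib/rancher/k3s/server/db /backup/k3s-db
sudo cp /var/lib/rancher/k3s/server/token /backup/k3s-token
sudo systemctl start k3s
```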
### CoreDNS
We'll run our own CoreDNS server so we can add custom hosts. This prevents the cluster from
collapsing if the internet drops out (something that apparently happens quite frequently).
```bash
helm repo add coredns https://coredns.github.io/helm
helm repo update
helm upgrade --install \
--namespace=kube-system \
--values coredns-values.yaml \
coredns \
coredns/coredns
```
You can test your DNS config with:
```bash
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
```
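From inside the dnstools pod you can exercise both cluster DNS and the custom zones defined in
`coredns-values.yaml` (the `10.43.0.10` address matches the chart's `clusterIP` below):
```bash
# run inside the dnstools pod
nslookup kubernetes.default.svc.cluster.local
dig @10.43.0.10 democratic-csi-server.reeselink.com
```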
### Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since
@@ -135,22 +160,6 @@ And now you can install the drivers:
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
-# enc0 bulk storage (nfs)
-helm upgrade \
-  --install \
-  --values truenas-nfs-enc0.yaml \
-  --namespace democratic-csi \
-  --create-namespace \
-  zfs-nfs-enc0 democratic-csi/democratic-csi
-# enc1 fast storage (nfs)
-helm upgrade \
-  --install \
-  --values truenas-nfs-enc1.yaml \
-  --namespace democratic-csi \
-  --create-namespace \
-  zfs-nfs-enc1 democratic-csi/democratic-csi
# enc0 bulk storage (iscsi)
helm upgrade \
--install \
@@ -159,7 +168,7 @@ helm upgrade \
--create-namespace \
zfs-iscsi-enc0 democratic-csi/democratic-csi
-# enc1 bulk storage (iscsi)
+# enc1 fast storage (iscsi)
helm upgrade \
--install \
--values truenas-iscsi-enc1.yaml \
@@ -174,6 +183,27 @@ You can test that things worked with:
kubectl apply -f democratic-csi-pvc-test.yaml
```
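If provisioning works, the test claim should reach `Bound` within a minute or so (the namespace
depends on what the test manifest specifies):
```bash
kubectl get pvc --all-namespaces
```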
### Dashboard
Create the dashboard and an admin user with:
```bash
helm upgrade \
--install \
--namespace kubernetes-dashboard \
--create-namespace \
dashboard-user ./helm/dashboard-user
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
Then log in with a token generated by the following:
```bash
kubectl -n kubernetes-dashboard create token admin-user
kubectl proxy
```
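With `kubectl proxy` running, the v2.7.0 recommended manifest serves the dashboard at
`http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/`;
paste the token from the first command into the login screen.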
### Nextcloud
The first chart we'll deploy is nextcloud.
@@ -208,6 +238,18 @@ helm upgrade --install \
--create-namespace
```
### Minecraft
Minecraft is available through the custom Helm chart (which includes a server-downloader init container):
```bash
helm upgrade --install \
nimcraft \
./helm/minecraft \
--namespace nimcraft \
--create-namespace
```
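Defaults such as the server version, RAM, and CPU come from `helm/minecraft/values.yaml` and can
be overridden per release with standard Helm flags (illustrative values):
```bash
helm upgrade --install \
  nimcraft \
  ./helm/minecraft \
  --namespace nimcraft \
  --create-namespace \
  --set server_version=1.19.4 \
  --set max_ram=8
```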
### Troubleshooting
Deleting a stuck namespace
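A common approach (a general sketch, not specific to this repo) is to clear the namespace's
finalizers so the API server can finish the delete; replace `stuck-ns` with the real namespace
and use with care:
```bash
kubectl get namespace stuck-ns -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/stuck-ns/finalize" -f -
```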

coredns-values.yaml (new file, 347 lines)

@@ -0,0 +1,347 @@
# Default values for coredns.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: coredns/coredns
tag: "1.9.4"
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
replicaCount: 1
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
## Create HorizontalPodAutoscaler object.
##
# autoscaling:
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 60
rollingUpdate:
maxUnavailable: 1
maxSurge: 25%
# Under heavy load it takes more than the standard time to remove a Pod endpoint from the
# cluster. Setting `preStopSleep` delays termination of the pod to make sure kube-proxy
# has enough time to catch up.
# preStopSleep: 5
terminationGracePeriodSeconds: 30
podAnnotations: {}
# cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
serviceType: "ClusterIP"
prometheus:
service:
enabled: false
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9153"
monitor:
enabled: false
additionalLabels: {}
namespace: ""
interval: ""
service:
clusterIP: "10.43.0.10"
# loadBalancerIP: ""
# externalIPs: []
# externalTrafficPolicy: ""
# The name of the Service
# If not set, a name is generated using the fullname template
name: ""
annotations: {}
serviceAccount:
create: false
# The name of the ServiceAccount to use
# If not set and create is true, a name is generated using the fullname template
name: ""
annotations: {}
rbac:
# If true, create & use RBAC resources
create: true
# If true, create and use PodSecurityPolicy
pspEnable: false
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
# name:
# isClusterService specifies whether chart should be deployed as cluster-service or normal k8s app.
isClusterService: true
# Optional priority class to be used for the coredns pods. Used for autoscaler if autoscaler.priorityClassName not set.
priorityClassName: ""
# Default zone is what Kubernetes recommends:
# https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options
servers:
- zones:
- zone: .
port: 53
# If serviceType is nodePort you can specify nodePort here
# nodePort: 30053
# hostPort: 53
plugins:
- name: errors
# Serves a /health endpoint on :8080, required for livenessProbe
- name: health
configBlock: |-
lameduck 5s
# Serves a /ready endpoint on :8181, required for readinessProbe
- name: ready
# Required to query kubernetes API for data
- name: kubernetes
parameters: cluster.local in-addr.arpa ip6.arpa
configBlock: |-
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
# Serves a /metrics endpoint on :9153, required for serviceMonitor
- name: prometheus
parameters: 0.0.0.0:9153
- name: file
parameters: /etc/coredns/democratic-csi-server.reeselink.db democratic-csi-server.reeselink.com
- name: file
parameters: /etc/coredns/democratic-csi-client.reeselink.db democratic-csi-client.reeselink.com
- name: forward
parameters: . /etc/resolv.conf
- name: cache
parameters: 30
- name: loop
- name: reload
- name: loadbalance
# Complete example with all the options:
# - zones: # the `zones` block can be left out entirely, defaults to "."
# - zone: hello.world. # optional, defaults to "."
# scheme: tls:// # optional, defaults to "" (which equals "dns://" in CoreDNS)
# - zone: foo.bar.
# scheme: dns://
# use_tcp: true # set this parameter to optionally expose the port on tcp as well as udp for the DNS protocol
# # Note that this will not work if you are also exposing tls or grpc on the same server
# port: 12345 # optional, defaults to "" (which equals 53 in CoreDNS)
# plugins: # the plugins to use for this server block
# - name: kubernetes # name of plugin, if used multiple times ensure that the plugin supports it!
# parameters: foo bar # list of parameters after the plugin
# configBlock: |- # if the plugin supports extra block style config, supply it here
# hello world
# foo bar
# Extra configuration that is applied outside of the default zone block.
# Example to include additional config files, which may come from extraVolumes:
# extraConfig:
# import:
# parameters: /opt/coredns/*.conf
extraConfig: {}
# To use the livenessProbe, the health plugin needs to be enabled in CoreDNS' server config
livenessProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
successThreshold: 1
# To use the readinessProbe, the ready plugin needs to be enabled in CoreDNS' server config
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
successThreshold: 1
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
# for example:
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: foo.bar.com/role
# operator: In
# values:
# - master
affinity: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#topologyspreadconstraint-v1-core
# for example:
# topologySpreadConstraints:
# - maxSkew: 1
# topologyKey: zone
# whenUnsatisfiable: DoNotSchedule
# labelSelector:
# matchLabels:
# foo: bar
topologySpreadConstraints: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
# for example:
# tolerations:
# - key: foo.bar.com/role
# operator: Equal
# value: master
# effect: NoSchedule
tolerations: []
# https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
podDisruptionBudget: {}
# configure custom zone files as per https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
zoneFiles:
- filename: democratic-csi-server.reeselink.db
domain: democratic-csi-server.reeselink.com
contents: |
democratic-csi-server.reeselink.com. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
democratic-csi-server.reeselink.com. IN NS b.iana-servers.net.
democratic-csi-server.reeselink.com. IN NS a.iana-servers.net.
democratic-csi-server.reeselink.com. IN A 172.20.0.1
- filename: democratic-csi-client.reeselink.db
domain: democratic-csi-client.reeselink.com
contents: |
democratic-csi-client.reeselink.com. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
democratic-csi-client.reeselink.com. IN NS b.iana-servers.net.
democratic-csi-client.reeselink.com. IN NS a.iana-servers.net.
democratic-csi-client.reeselink.com. IN A 127.0.0.1
# optional array of extra volumes to create
extraVolumes: []
# - name: some-volume-name
# emptyDir: {}
# optional array of mount points for extraVolumes
extraVolumeMounts: []
# - name: some-volume-name
# mountPath: /etc/wherever
# optional array of secrets to mount inside coredns container
# possible usecase: need for secure connection with etcd backend
extraSecrets: []
# - name: etcd-client-certs
# mountPath: /etc/coredns/tls/etcd
# - name: some-fancy-secret
# mountPath: /etc/wherever
# To support legacy deployments using CoreDNS with the "k8s-app: kube-dns" label selectors.
# See https://github.com/coredns/helm/blob/master/charts/coredns/README.md#adopting-existing-coredns-resources
# k8sAppLabelOverride: "kube-dns"
# Custom labels to apply to Deployment, Pod, Configmap, Service, ServiceMonitor. Including autoscaler if enabled.
customLabels: {}
# Custom annotations to apply to Deployment, Pod, Configmap, Service, ServiceMonitor. Including autoscaler if enabled.
customAnnotations: {}
## Alternative configuration for HPA deployment if wanted
#
hpa:
enabled: false
minReplicas: 1
maxReplicas: 2
metrics: []
## Configure a cluster-proportional-autoscaler for coredns
# See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler
autoscaler:
# Enable the cluster-proportional-autoscaler
enabled: false
# Number of cores in the cluster per coredns replica
coresPerReplica: 256
# Number of nodes in the cluster per coredns replica
nodesPerReplica: 16
# Min size of replicaCount
min: 0
# Max size of replicaCount (default of 0 is no max)
max: 0
# Whether to include unschedulable nodes in the nodes/cores calculations - this requires version 1.8.0+ of the autoscaler
includeUnschedulableNodes: false
# If true does not allow single points of failure to form
preventSinglePointFailure: true
## Optionally specify some extra flags to pass to cluster-proportional-autoscaler.
## Useful for e.g. the nodelabels flag.
# customFlags:
# - --nodelabels=topology.kubernetes.io/zone=us-east-1a
image:
repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
tag: "1.8.5"
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
# Optional priority class to be used for the autoscaler pods. priorityClassName used if not set.
priorityClassName: ""
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
affinity: {}
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# expects input structure as per specification https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#toleration-v1-core
tolerations: []
# resources for autoscaler pod
resources:
requests:
cpu: "20m"
memory: "10Mi"
limits:
cpu: "20m"
memory: "10Mi"
# Options for autoscaler configmap
configmap:
## Annotations for the coredns-autoscaler configmap
# i.e. strategy.spinnaker.io/versioned: "false" to ensure configmap isn't renamed
annotations: {}
# Enables the livenessProbe for cluster-proportional-autoscaler - this requires version 1.8.0+ of the autoscaler
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
deployment:
enabled: true
name: ""
## Annotations for the coredns deployment
annotations: {}


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,24 @@
apiVersion: v2
name: dashboard
description: A Kubernetes Dashboard User Deployment
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -0,0 +1,20 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard


helm/minecraft/.helmignore (new executable file, 23 lines)

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

helm/minecraft/Chart.yaml (new executable file, 23 lines)

@@ -0,0 +1,23 @@
apiVersion: v2
name: minecraft
description: A Simple Minecraft Chart
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0


@@ -0,0 +1,107 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}
data:
eula.txt: |
eula=true
server.properties: {{ toYaml .Values.server_props | indent 2 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
selector:
matchLabels:
app: {{ .Release.Name }}
strategy:
type: Recreate
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
initContainers:
- name: get-version
image: {{ .Values.get_server.image }}
imagePullPolicy: Always
env:
- name: SERVER_VERSION
value: {{ .Values.server_version | quote }}
volumeMounts:
- mountPath: /downloads
name: data
containers:
- name: {{ .Release.Name }}
image: {{ .Values.image }}
imagePullPolicy: Always
ports:
- containerPort: 25565
volumeMounts:
- mountPath: /mc_data
name: data
- name: properties
mountPath: /mc_data/server.properties
subPath: server.properties
- name: properties
mountPath: /mc_data/eula.txt
subPath: eula.txt
tty: true
stdin: true
env:
- name: MAX_RAM
value: {{ .Values.max_ram | quote }}
- name: MIN_RAM
value: "1"
resources:
requests:
memory: {{ div .Values.max_ram 2 }}Gi
cpu: 1m
limits:
memory: {{ add 1 .Values.max_ram }}Gi
cpu: {{ .Values.max_cpu | quote }}
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ .Release.Name }}
# emptyDir:
# sizeLimit: 2Gi
- name: properties
configMap:
name: {{ .Release.Name }}
securityContext:
fsGroup: 2000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc1
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
selector:
app: {{ .Release.Name }}
ports:
- port: {{ .Values.port }}
targetPort: 25565
name: {{ .Release.Name }}
type: LoadBalancer

helm/minecraft/values.yaml (new executable file, 45 lines)

@@ -0,0 +1,45 @@
image: ducoterra/minecraft:latest
get_server:
image: ducoterra/get-minecraft:2.0.0
server_version: "1.19.4"
port: 25565
max_cpu: 4
max_ram: 8
server_props: |
max-tick-time=60000
generator-settings=
force-gamemode=false
allow-nether=true
gamemode=0
broadcast-console-to-ops=true
enable-query=false
player-idle-timeout=0
difficulty=2
spawn-monsters=true
op-permission-level=4
pvp=true
snooper-enabled=true
level-type=default
hardcore=false
enable-command-block=false
max-players=20
network-compression-threshold=256
resource-pack-sha1=
max-world-size=29999984
server-port=25565
server-ip=
spawn-npcs=true
allow-flight=true
level-name=world
view-distance=32
resource-pack=
spawn-animals=true
white-list=true
generate-structures=true
online-mode=true
max-build-height=512
level-seed=
prevent-proxy-connections=false
use-native-transport=true
motd=Welcome to Nimcraft!
enable-rcon=false