Compare commits


2 Commits

Author SHA1 Message Date
b97f41eb70 fedora updates and systemd renames 2025-10-30 22:51:53 -04:00
6df02e8dff k0s updates 2025-10-30 22:51:17 -04:00
21 changed files with 488 additions and 103 deletions

View File

@@ -1,3 +1,13 @@
# Kubernetes
-See [k3s](/active/systemd_k3s/k3s.md)
## CLI Tools
kubectl: <https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>
helm: <https://helm.sh/docs/intro/install/>
## Install a Kubernetes Server
For k3s, see [k3s](/active/systemd_k3s/k3s.md)
For k0s, see [k0s](/active/systemd_k0s/k0s.md)

View File

@@ -0,0 +1,7 @@
service:
  http:
    type: LoadBalancer
    externalTrafficPolicy: Local
  ssh:
    type: LoadBalancer
    externalTrafficPolicy: Local

View File

@@ -21,14 +21,10 @@ ingress:
persistence:
  enabled: true
  create: true
-  storageClass: zfs-iscsi-enc0
  claimName: data-gitea-staging-0
  annotations:
    "helm.sh/resource-policy": keep
-global:
-  storageClass: zfs-iscsi-enc1
postgresql:
  enabled: true
  image:
@@ -36,7 +32,6 @@ postgresql:
  primary:
    persistence:
      enabled: true
-      storageClass: zfs-iscsi-enc1
      annotations:
        "helm.sh/resource-policy": keep

View File

@@ -1,6 +1,7 @@
# Gitea
- [Gitea](#gitea)
- [Demo](#demo)
- [Staging](#staging)
- [Install](#install)
- [Backup and Restore](#backup-and-restore)
@@ -14,6 +15,17 @@ they decide to change things. This is the first chart (besides ingress-nginx) wh
we need to pay attention to the MetalLB annotation. This has been set in the values.yaml
file.
## Demo
```bash
helm upgrade --install \
gitea \
gitea-charts/gitea \
--values active/kubernetes_gitea/gitea-demo-values.yaml \
--namespace gitea \
--create-namespace
```
## Staging
There is a `gitea-staging.yaml` file with staging values. This should be installed in
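The demo command assumes the `gitea-charts` Helm repo is already registered; if it isn't, something like this first (repo URL per the upstream Gitea chart docs):
```bash
# One-time setup so `gitea-charts/gitea` resolves
helm repo add gitea-charts https://dl.gitea.com/charts/
helm repo update
```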

View File

@@ -10,8 +10,14 @@
```bash
# Download the updated template from github
-kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.31" > active/kubernetes_local-path-provisioner/local-path-storage.yaml
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.32" > active/kubernetes_local-path-provisioner/local-path-storage.yaml
# Apply customizations (ssd/hdd storage, read write many support)
kubectl kustomize active/kubernetes_local-path-provisioner | kubectl apply -f -
```
Mark the class as default
```bash
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
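A quick optional check that the annotation took effect:
```bash
# local-path should now show "(default)" next to its name
kubectl get storageclass
```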

View File

@@ -176,7 +176,7 @@ spec:
              fieldPath: metadata.namespace
        - name: CONFIG_MOUNT_PATH
          value: /etc/config/
-        image: rancher/local-path-provisioner:v0.0.31
        image: rancher/local-path-provisioner:v0.0.32
        imagePullPolicy: IfNotPresent
        name: local-path-provisioner
        volumeMounts:

View File

@@ -5,6 +5,15 @@ below installs nimcraft. For each installation you'll want to create your own va
with a new port. The server-downloader is called "minecraft_get_server" and is available on
[Github](https://github.com/ducoterra/minecraft_get_server).
After installing, you can run admin commands (like whitelisting players) by
attaching to the container:
```bash
kubectl attach -it <pod>
> /whitelist add ducoterra
```
## Testing
```bash

View File

@@ -56,10 +56,10 @@ spec:
value: "1" value: "1"
resources: resources:
requests: requests:
memory: {{ div .Values.max_ram 2 }}Gi memory: "{{ div .Values.max_ram 2 }}Gi"
cpu: 1m cpu: 1m
limits: limits:
memory: {{ add 1 .Values.max_ram }}Gi memory: "{{ add 1 .Values.max_ram }}Gi"
cpu: {{ .Values.max_cpu | quote }} cpu: {{ .Values.max_cpu | quote }}
volumes: volumes:
- name: data - name: data

View File

@@ -5,7 +5,6 @@ metadata:
  annotations:
    "helm.sh/resource-policy": keep
spec:
-  storageClassName: ssd
  accessModes:
  - ReadWriteOnce
  resources:

View File

@@ -2,11 +2,7 @@ apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
-  annotations:
-    metallb.universe.tf/address-pool: "external"
-    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
spec:
-  ipFamilies: ["IPv6"]
  externalTrafficPolicy: Cluster
  selector:
    app: {{ .Release.Name }}

View File

@@ -561,7 +561,7 @@ dnf upgrade --refresh
reboot
# Set the releasever to the version you want to upgrade to
-dnf system-upgrade download --releasever=42
dnf system-upgrade download --releasever=43
dnf system-upgrade reboot
```
@@ -666,6 +666,8 @@ mkfs.xfs /dev/mapper/vg0-docker--data
```bash
nmcli connection modify Wired\ connection\ 1 ipv6.addr-gen-mode eui64
nmcli connection modify Wired\ connection\ 1 ipv6.ip6-privacy disabled
systemctl restart NetworkManager
```
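To double-check the connection picked up both IPv6 settings (an optional check, not part of the original steps):
```bash
nmcli connection show "Wired connection 1" | grep -i ipv6
```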
### Install and Enable Cockpit

View File

@@ -603,7 +603,7 @@ flatpak install org.gnome.Evolution
# Virtualization
sudo dnf group install --with-optional virtualization
-sudo systemctl start libvirtd
sudo systemctl enable --now libvirtd virtnetworkd.service
```
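A quick sanity check that the daemons came up and the default libvirt network exists (optional):
```bash
sudo systemctl status libvirtd virtnetworkd --no-pager
sudo virsh -c qemu:///system net-list --all
```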
## NVM

View File

@@ -4,6 +4,7 @@
- [Notes](#notes)
- [Quickstart Debugging Setup](#quickstart-debugging-setup)
- [Quickstart Production Setup](#quickstart-production-setup)
- [NFS](#nfs)
- [Tips](#tips)
- [Adding a user](#adding-a-user)
- [Adding a Smart Card Certificate](#adding-a-smart-card-certificate)
@@ -100,17 +101,24 @@ Now skip to [Get PIV Working](#piv)
<https://www.freeipa.org/page/Quick_Start_Guide>
-- Set your hostname to your server's fqdn with `hostnamectl hostname freeipa.reeselink.com`
-- Ensure you have a DNS entry pointing to your host
-- Open ports:
1. Set your hostname to your server's fqdn with `hostnamectl hostname freeipa.reeselink.com`
2. If you want freeipa to manage your DNS, make sure you don't have a DNS address pointing to your domain
3. Open freeipa ports
```bash
-firewall-cmd --add-service=freeipa-ldap --add-service=freeipa-ldaps --permanent
firewall-cmd --add-service=freeipa-4
firewall-cmd --add-service=freeipa-4 --permanent
firewall-cmd --reload
```
-- Set a permanet DNS resolver: `sudo echo "nameserver 1.1.1.1" > /etc/resolv.conf`
-- Disable NetworkManager DNS management
4. Set a permanent DNS resolver
```bash
rm /etc/resolv.conf
echo "nameserver 1.1.1.1" > /etc/resolv.conf
```
5. Disable NetworkManager DNS management
```bash
vim /etc/NetworkManager/NetworkManager.conf
@@ -119,22 +127,73 @@ vim /etc/NetworkManager/NetworkManager.conf
dns=none
```
-Note, if installing for local use only, set /etc/hosts and reply "yes" to configure dns.
-```bash
-vim /etc/hosts
-192.168.122.100 freeipa.reeselink.com
-```
-- Restart NetworkManager: `systemctl restart NetworkManager`
-- Ensure resolv.conf hasn't been repopulated: `cat /etc/resolv.conf`
-- Install freeipa: `dnf install -y freeipa-server freeipa-server-dns`
-- Install the server (mostly choose defaults and sane options): `ipa-server-install`
-- Authenticate as admin: `kinit admin`
6. Reboot
7. Ensure resolv.conf hasn't been repopulated: `cat /etc/resolv.conf`
8. Install freeipa: `dnf install -y freeipa-server freeipa-server-dns`
9. Install the server (mostly choose defaults and sane options): `ipa-server-install`
10. Authenticate as admin: `kinit admin`
Now skip to [Get PIV Working](#piv)
## NFS
<https://www.techrepublic.com/article/kerberos-authentication-with-nfsv4/>
```bash
mkdir /exports
chmod 1777 /exports
mount --bind /srv /exports/srv
```
What this does is remount /srv to /exports/srv. Effectively, this means that
/srv can be accessed directly, or via /exports/srv, and changes in one location
reflect in the other.
To make this persistent, add the mount command above to /etc/rc.d/rc.local or
some similar executed-at-boot script.
Next, edit `/etc/sysconfig/nfs` and enable the SECURE_NFS option:
```conf
SECURE_NFS="yes"
```
Then edit `/etc/exports`. The “/exports” entry has the “fsid=0” option, which
tells NFS that this is the “root” export.
```fstab
/exports gss/krb5(rw,sync,fsid=0,insecure,no_subtree_check)
/exports/srv gss/krb5(rw,sync,nohide,insecure,no_subtree_check)
```
Create the server principal for the NFS server and add it to the keytab file on
the server using kadmin (usually /etc/krb5.keytab):
```bash
kadmin.local
kadmin.local: addprinc -randkey nfs/nfsserver.domain.com
kadmin.local: ktadd -e des-cbc-crc:normal -k /etc/krb5.keytab nfs/nfsserver.domain.com
kadmin.local: quit
```
Edit `/etc/idmapd.conf` and make sure the Nobody-User and Nobody-Group options
are correct (i.e. on Red Hat Enterprise Linux and Fedora, use the “nfsnobody”
user and group; other distributions may just use “nobody”), then restart the services:
```bash
systemctl restart nfs rpcidmapd
```
Open TCP port 2049 for use with NFSv4:
```bash
firewall-cmd --add-service=nfs
firewall-cmd --add-service=nfs --permanent
firewall-cmd --reload
```
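If you'd rather persist the bind mount via /etc/fstab instead of rc.local, a sketch (assuming the same /srv and /exports/srv paths as above):
```bash
# The bind target must exist before mounting
mkdir -p /exports/srv
# Hypothetical fstab entry for the bind mount; mount -a applies it without a reboot
echo "/srv  /exports/srv  none  bind  0 0" >> /etc/fstab
mount -a
```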
## Tips
```bash

active/software_k0s/k0s.md Normal file
View File

@@ -0,0 +1,124 @@
# K0s
## Install Single Node Cluster
<https://docs.k0sproject.io/v0.11.0/k0s-single-node/>
```bash
# Allow all traffic in
firewall-cmd --set-default-zone=trusted
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Setup the config
k0s config create > k0s.yaml
# Install single node cluster controller/node
k0s install controller -c k0s.yaml --enable-worker
# Start and enable the service
systemctl enable --now k0scontroller
# Enable bash completion
echo 'source <(k0s completion bash)' >>~/.bashrc
source ~/.bashrc
# Make an admin user
mkdir ~/.kube
k0s kubeconfig create --groups "system:m asters" admin > ~/.kube/config
# Remove the taint that prevents scheduling on the controller
kubectl edit node
```
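If you'd rather not hand-edit the node object, an equivalent one-liner (this assumes the controller carries the default `node-role.kubernetes.io/master:NoSchedule` taint; confirm with `kubectl describe node` first):
```bash
# The trailing "-" removes the taint so pods can schedule on the controller
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
```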
## Install Multi Node Cluster
<https://docs.k0sproject.io/v0.11.0/k0s-multi-node/>
Install the controller on the controller machine
```bash
# Allow all traffic in
firewall-cmd --set-default-zone=trusted
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Save default config
k0s config create > k0s.yaml
# Install the controller
k0s install controller
# Enable the controller
systemctl enable --now k0scontroller
# Enable bash completion
echo 'source <(k0s completion bash)' >>~/.bashrc
source ~/.bashrc
# Make an admin user (scp ~/.kube/config to your operator machine)
# kubectl config set-context --current --namespace kube-system
mkdir ~/.kube
k0s kubeconfig create --groups "system:masters" admin > ~/.kube/config
# Generate a worker join token
k0s token create --role=worker > worker0-token
```
Now on the worker machine, install the worker
```bash
# Allow all traffic in
firewall-cmd --set-default-zone=trusted
# On the operator, copy the token file from the controller to the worker
scp vm-k0s-controller:worker0-token vm-k0s-worker:token-file
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Join the worker
k0s install worker --token-file token-file
# Start the service
systemctl enable --now k0sworker
# Enable bash completion
echo 'source <(k0s completion bash)' >>~/.bashrc
source ~/.bashrc
```
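Once the worker service is running it should register with the controller; a quick check from the operator machine (assumes the admin kubeconfig created above is in place):
```bash
# The new worker should appear and eventually report Ready
kubectl get nodes -o wide
```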
## Uninstall
```bash
systemctl stop k0scontroller
k0s reset
reboot
```
## Install Metallb
<https://docs.k0sproject.io/v1.26.0+k0s.0/examples/metallb-loadbalancer/>
1. Create a VLAN with a dedicated subnet for Metallb. Disable DHCP.
2. Attach this new VLAN to your worker nodes
3. Assign the worker nodes an address within the created network.
4. Install Metallb. Check `active/software_k0s/metallb-configmap.yaml` before proceeding.
```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
kubectl apply -f active/software_k0s/metallb-configmap.yaml
```
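To sanity-check the pool, any `LoadBalancer` Service should get an address from the configured range; a throwaway example (the nginx image and names here are just placeholders):
```bash
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
# EXTERNAL-IP should come from 192.168.123.100-254
kubectl get service lb-test
# Clean up
kubectl delete service/lb-test deployment/lb-test
```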
### Uninstall Metallb
```bash
kubectl delete -f active/software_k0s/metallb-configmap.yaml
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
```

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.123.100-192.168.123.254

View File

@@ -0,0 +1,5 @@
# Packer
## Qemu Builds
<https://github.com/goffinet/packer-kvm?tab=readme-ov-file>

View File

@@ -2,7 +2,7 @@
- [QEMU](#qemu)
- [QCOW2](#qcow2)
-- [Restore qcow snapshots](#restore-qcow-snapshots)
- [QCOW2 Snapshots](#qcow2-snapshots)
- [Convert qcow to bootable drive](#convert-qcow-to-bootable-drive)
- [Convert bootable drive to qcow](#convert-bootable-drive-to-qcow)
- [ISO](#iso)
@@ -17,7 +17,7 @@ VDI (VirtualBox) vdi
VHD (Hyper-V) vpc
VMDK (VMware) vmdk
-### Restore qcow snapshots
### QCOW2 Snapshots
```bash
# Create a snapshot

View File

@@ -0,0 +1,11 @@
<network>
  <name>dual-stack</name>
  <forward mode="nat"/>
  <domain name="dual-stack"/>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.2" end="192.168.100.254"/>
    </dhcp>
  </ip>
  <ip family="ipv6" address="fd4d:58e7:17f6:1::1" prefix="64"/>
</network>

View File

@@ -0,0 +1,7 @@
<network>
  <name>dual-stack-no-dhcp</name>
  <forward mode="nat"/>
  <domain name="dual-stack-no-dhcp"/>
  <ip address="192.168.123.1" netmask="255.255.255.0"/>
  <ip family="ipv6" address="fd4d:58e7:17f6:1::1" prefix="64"/>
</network>

View File

@@ -2,6 +2,19 @@
Virtual Machine Management
- [Virsh](#virsh)
- [Before you Begin](#before-you-begin)
- [Useful Virsh Commands](#useful-virsh-commands)
- [Virsh Networking](#virsh-networking)
- [Create a Virtual Network](#create-a-virtual-network)
- [Attach a New Virtual Network](#attach-a-new-virtual-network)
- [Set a Static IP](#set-a-static-ip)
- [Creating VMs](#creating-vms)
- [Create VM with No Graphics and use an Existing QCOW2 Disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk)
- [Create VM with Graphics using an ISO Installation Disk](#create-vm-with-graphics-using-an-iso-installation-disk)
- [Create VM using Host Device as Disk](#create-vm-using-host-device-as-disk)
- [Snapshots](#snapshots)
## Before you Begin
1. Add yourself to the `qemu` and `libvirt` groups: `usermod -aG libvirt,qemu ducoterra`
@@ -10,8 +23,9 @@ Virtual Machine Management
4. Allow group write access to images: `chmod 770 /var/lib/libvirt/images`
5. Allow group write access to iso: `chmod 770 /var/lib/libvirt/iso`
6. Tell virsh to connect to your root system rather than your user: `export LIBVIRT_DEFAULT_URI='qemu:///system'`
7. Export your editor so virsh knows what to use: `export EDITOR=vim`
-## VM Details
## Useful Virsh Commands
```bash
# Show node info
@@ -22,37 +36,6 @@ osinfo-query os
# List all current machines
virsh list --all
-```
-## Creating VMs
-If you have [an osbuild
-image](/active/software_osbuild/image_builder.md#installing) you can run
-```bash
-sudo systemctl start osbuild-composer.socket
-composer-cli compose list
-composer-cli compose image --filename /var/lib/libvirt/images/fedora-42-test.qcow2 image-uuid
-```
-now to have a qcow2 available during install.
-```bash
-# `--location /path/to/image.iso` supplies a disk installer. (Remove `--import`)
-# `--import` skips the installation process.
-# `--graphics spice --video qxl --channel spicevmc` installs graphics
-# `--console pty,target.type=virtio` adds a console connection
-# For any command, use `virt-install --arg=?` to see all available options
-virt-install \
---name fedora42-test \
---description "Test VM with Fedora42" \
---cpu host-model --vcpus sockets=1,cores=8,threads=2 \
---ram=8192 \
---os-variant=fedora41 \
---import --disk path=/var/lib/libvirt/images/fedora-42-test.qcow2,bus=virtio \
---network bridge:virbr0 \
---graphics none \
---console pty,target.type=virtio
# Connect to console VM
virsh console fedora42-test
@@ -78,3 +61,178 @@ virsh undefine <domain>
# Remove a VM including storage
virsh undefine <domain> --remove-all-storage
```
## Virsh Networking
### Create a Virtual Network
Creating a new network will require an XML configuration file. To see the
default network's configuration, use
```bash
virsh net-dumpxml default > virbr0.xml
```
To create a dual-stack network, use the following. (Note, I generated a unique
local ipv6 address [here](https://www.unique-local-ipv6.com/)).
```xml
<network>
  <name>dual-stack</name>
  <forward mode="nat"/>
  <domain name="dual-stack"/>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.2" end="192.168.100.254"/>
    </dhcp>
  </ip>
  <ip family="ipv6" address="fd4d:58e7:17f6:1::1" prefix="64"/>
</network>
```
I've already defined this network in `active/software_virsh/dual-stack-dhcp.xml`. Install it with
```bash
# Define and autostart the network
virsh net-define active/software_virsh/dual-stack-dhcp.xml
virsh net-start dual-stack-dhcp
virsh net-autostart dual-stack-dhcp
# List networks to ensure it created
virsh net-list --all
# Get the UUID of the created network
virsh net-uuid dual-stack-dhcp
```
### Attach a New Virtual Network
```bash
export VM_NAME=my_vm
virsh attach-interface \
--type bridge \
--source virbr1 \
--model virtio \
--config \
--live \
--domain ${VM_NAME}
```
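To confirm the interface actually attached (a quick optional check):
```bash
# Lists the domain's network interfaces, including the newly attached bridge
virsh domiflist ${VM_NAME}
```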
### Set a Static IP
To set a static IP, run `virsh net-edit default` and add the following between `<dhcp>` and `</dhcp>`
```xml
<host mac='xx:xx:0x:xx:xx:1x' name='virtual_machine' ip='1xx.1xx.1xx.xx'/>
```
Then run
```bash
# Apply the change by bouncing the network, then restart the VM so it picks up the reservation
virsh net-destroy default
virsh net-start default
virsh shutdown virtual_machine
systemctl restart libvirtd
virsh start virtual_machine
```
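After the restart you can verify the reservation by watching the leases on the network (the VM should pick up the static address when it renews):
```bash
virsh net-dhcp-leases default
```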
## Creating VMs
If you have [an osbuild
image](/active/software_osbuild/image_builder.md#installing) you can run the
following to generate a qcow2 disk image. Then you can [create a VM with an
existing qcow2
disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk) and skip the
installation process altogether.
```bash
sudo systemctl start osbuild-composer.socket
composer-cli compose list
composer-cli compose image --filename /var/lib/libvirt/images/fedora-42-test.qcow2 image-uuid
```
### Create VM with No Graphics and use an Existing QCOW2 Disk
```bash
# Start the default network if it isn't already
virsh net-start --network default
# `--location /path/to/image.iso` supplies a disk installer. (Remove `--import`)
# `--import` skips the installation process.
# `--graphics spice --video qxl --channel spicevmc` installs graphics
# `--console pty,target.type=virtio` adds a console connection
# For any command, use `virt-install --arg=?` to see all available options
export VM_NAME="fedora42-test"
export VM_DESCRIPTION="Test VM with Fedora42"
export VM_DISK_PATH="/var/lib/libvirt/images/fedora-42-test.qcow2"
virt-install \
--name "${VM_NAME}" \
--description "${DESCRIPTION}" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics none \
--console pty,target.type=virtio \
--import --disk "path=${VM_DISK_PATH},bus=virtio"
```
### Create VM with Graphics using an ISO Installation Disk
```bash
# `--cdrom /path/to/image.iso` supplies a disk installer. (Remove `--import`)
# `--import` skips the installation process.
# `--graphics spice --video qxl --channel spicevmc` installs graphics
# `--console pty,target.type=virtio` adds a console connection
# For any command, use `virt-install --arg=?` to see all available options
export VM_NAME="fedora43-kinoite-test"
export VM_DESCRIPTION="Test VM with Fedora43 Kinoite"
export VM_DISK_PATH="/var/lib/libvirt/images/fedora-43-kinoite.qcow2"
export VM_ISO_PATH="/var/lib/libvirt/iso/Fedora-Kinoite-ostree-x86_64-43-1.6.iso"
virt-install \
--name "${VM_NAME}" \
--description "${DESCRIPTION}" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics spice --video qxl --channel spicevmc \
--cdrom ${VM_ISO_PATH} \
--disk "path=${VM_DISK_PATH},size=64,bus=virtio,format=qcow2"
```
### Create VM using Host Device as Disk
```bash
# `--cdrom /path/to/image.iso` supplies a disk installer. (Remove `--import`)
# `--import` skips the installation process.
# `--graphics spice --video qxl --channel spicevmc` installs graphics
# `--console pty,target.type=virtio` adds a console connection
# `--hostdev 0x1234:0x5678` adds a block storage device
# For any command, use `virt-install --arg=?` to see all available options
export VM_NAME="usb-linux"
export VM_DESCRIPTION="Linux running 0x13fe:0x6500 as the boot drive"
virt-install \
--name "${VM_NAME}" \
--description "${DESCRIPTION}" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--import \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics spice --video qxl --channel spicevmc \
--hostdev 0x13fe:0x6500,boot.order=1 \
--disk none
```
## Snapshots
See [qemu qcow2 snapshots](/active/software_qemu/qemu.md#qcow2-snapshots)

View File

@@ -1,27 +0,0 @@
# K0s
## Install
```bash
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Setup the config
k0s config create > k0s.yaml
# Install controller/node
k0s install controller -c k0s.yaml --enable-worker
systemctl enable --now k0scontroller
# Make an admin user
mkdir ~/.kube
k0s kubeconfig create --groups "system:masters" admin > ~/.kube/config
```
## Uninstall
```bash
systemctl stop k0scontroller
k0s reset
reboot
```