Compare commits


14 Commits

14 changed files with 366 additions and 59 deletions

.vscode/vscode.md vendored
View File

@@ -21,11 +21,23 @@ Add to user keyboard settings (json)
## Shell
Edit settings.json
I would recommend using tmux as your default shell. VSCode automatically creates new
sessions when you add or split terminal windows. The only small problem is that closing
VSCode and reopening it won't reattach the previous sessions; you'll need to run
`tmux switch-client -t #` to get back to each session you lost.
```json
"terminal.integrated.profiles.osx": {
//...existing profiles...
"tmux-shell": {
"path": "tmux",
}
}
```
```json
{
  "terminal.integrated.defaultProfile.linux": "tmux"
}
```
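For example, after reopening VSCode you can list the surviving sessions and hop back to one of them (the session number here is just an example):
```bash
# Sessions created by the old windows are still running under the tmux server
tmux ls
# From inside the new integrated terminal, switch to a lost session
tmux switch-client -t 3
```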

View File

@@ -2,30 +2,22 @@
COMMAND_PREFIX=""
# We need to run the podman build on the host. If we're in a toolbox or a distrobox we need to
# spawn the process on the host. Fortunately both toolbox and distrobox provide ways for us to
# do this. We just need to check if "flatpak-spawn" (toolbox) or "distrobox-host-exec" (distrobox)
# exist in the PATH of our environment.
if command -v "distrobox-host-exec" &> /dev/null; then
echo "distrobox detected"
# In distrobox you can run "distrobox-host-exec ./path/to/this/script.sh"
COMMAND_PREFIX="distrobox-host-exec"
elif command -v "flatpak-spawn" &> /dev/null; then
echo "toolbox detected"
# In toolbox you can run "flatpak-spawn --host ./path/to/this/script.sh" to run this on the host
COMMAND_PREFIX="flatpak-spawn --host"
else
echo "already running as host"
fi
# If you haven't already, read up on setting the docker context to use podman as the backend.
# TL;DR
# systemctl --user enable --now podman.socket
# docker context create podman --docker host=unix://$XDG_RUNTIME_DIR/podman/podman.sock
# docker context use podman
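# Optional sanity check once the context is active (uncomment to use):
# docker context show   # should print "podman"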
# Run the build for the CPU image
docker build \
-t gitea.reeseapps.com/services/arch-toolbox:latest \
-f ./infrastructure/graduated/distoolbox/arch-toolbox.containerfile \
--load \
./infrastructure/graduated/distoolbox
# Run the build for the AMD gpu image
docker build \
-t gitea.reeseapps.com/services/arch-toolbox-amdgpu:latest \
-f ./infrastructure/graduated/distoolbox/arch-amdgpu-toolbox.containerfile \
--load \
./infrastructure/graduated/distoolbox

View File

@@ -29,20 +29,9 @@ export LC_ALL="C.UTF-8"
# AWS
complete -C '/usr/local/bin/aws_completer' aws
# Podman
export CONTAINER_HOST=unix:///run/user/1000/podman/podman.sock
# Go puts binaries here
export PATH="$PATH:$(go env GOBIN):$(go env GOPATH)/bin"
# Nodejs Binaries
export npm_config_prefix="$HOME/.local"
# Pyenv
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
# Calculate all folder sizes in current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'
# Calculate all file sizes in current dir

View File

@@ -49,6 +49,8 @@ RUN pacman -S --noconfirm \
wine \
# Container engine providing an interface that works similarly to Docker but is container format-agnostic.
podman \
# Docker compatibility. We won't be using the daemon unless you want to.
docker docker-compose docker-buildx \
# Archive utility similar to GNU tar, used to package files into single archive files.
unzip \
# An open source version of cat(1) with syntax highlighting and Git integration.
@@ -92,15 +94,16 @@ RUN pacman -S --noconfirm \
# For distrobox
systemd \
# Code build \
make gcc \
# makepkg support
base-devel \
# Fonts, so you never have to think about them again
noto-fonts noto-fonts-cjk noto-fonts-emoji noto-fonts-extra
########################
##### Extra Apps #####
########################
# Install UV, a tool for managing Python environments.
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
# Install AWS CLI version 2.
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip -qq awscliv2.zip && \

View File

@@ -6,6 +6,7 @@
- [rpm-ostree](#rpm-ostree)
- [Git, Vim, etc](#git-vim-etc)
- [Libvirt, Qemu, KVM](#libvirt-qemu-kvm)
- [Docker Compose and Docker Buildkit with Rootless Podman](#docker-compose-and-docker-buildkit-with-rootless-podman)
- [Network](#network)
- [Hostname](#hostname)
- [VLAN Setup with nmcli](#vlan-setup-with-nmcli)
@@ -80,6 +81,20 @@ rpm-ostree install virt-manager libvirt
systemctl enable --now libvirtd
```
### Docker Compose and Docker Buildkit with Rootless Podman
This lets you use Podman with full docker-compose compatibility.
<https://emersion.fr/blog/2025/using-podman-compose-and-buildkit/>
```bash
rpm-ostree install docker-compose docker-buildx
reboot
systemctl --user enable --now podman.socket
docker context create podman --docker host=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker context use podman
```
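A quick sanity check that the docker CLI is really talking to podman (assuming the context above is active):
```bash
# Should print "podman"
docker context show
# Containers listed here come from the podman socket
docker ps
```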
## Network
### Hostname

View File

@@ -0,0 +1,117 @@
# Manjaro Gaming Server
## Setup
Tested on TrueNAS with libvirt.
1. Do not boot the instance with a virtual display
2. Pass through usb mouse/keyboard
3. Pass through the graphics card
4. Walk through the installer as normal
5. Reboot into the installation
6. Settings -> Screen Locking -> Never
7. Settings -> Power Management -> When power button pressed: Shut Down
8. Settings -> Power Management -> Turn off screen: Never
9. Settings -> Session -> Desktop Session -> On login, launch apps that were open: Start with an empty session
10. `sudo systemctl enable --now sshd`
## SSH
Make sure you forward 127.0.0.1:47990 so you can reach Sunshine from your local machine
at <https://127.0.0.1:47990> to enter the PIN.
```conf
Host gamebox2
HostName gamebox2.reeselink.com
User gamer
TCPKeepAlive yes
IdentityFile ~/.ssh/id_gamebox_rsa
LocalForward 47990 127.0.0.1:47990
```
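With that entry in place, connecting brings the tunnel up automatically:
```bash
ssh gamebox2
# then open https://127.0.0.1:47990 locally to enter the PIN
```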
## Basic Apps
```bash
sudo pacman -S vim
sudo flatpak install io.missioncenter.MissionCenter
```
## Install Sunshine
```bash
# Update the system
sudo pacman -Syu
# Install nvidia drivers
sudo mhwd -i pci video-nvidia
# Install sunshine
# The flatpak seems to work well. The arch package keeps breaking due to
# deps. See boost-libs and then libicuuc.so.76.
# <https://docs.lizardbyte.dev/projects/sunshine/latest/md_docs_2getting__started.html#install-system-level>
sudo flatpak install --system flathub dev.lizardbyte.app.Sunshine
flatpak run --command=additional-install.sh dev.lizardbyte.app.Sunshine
systemctl --user enable --now sunshine
sudo reboot
```
If you need to uninstall:
```bash
systemctl --user disable --now sunshine
sudo flatpak uninstall --system --delete-data dev.lizardbyte.app.Sunshine
```
## Steam
Play games.
```bash
sudo flatpak install com.valvesoftware.Steam
```
## MangoHud
<https://github.com/flightlessmango/MangoHud?tab=readme-ov-file#flatpak>
Pretty Numbers for your Games.
```bash
sudo flatpak install org.freedesktop.Platform.VulkanLayer.MangoHud
flatpak override --user --env=MANGOHUD=1 com.valvesoftware.Steam
# Allow flatpak apps to read Mangohud config
flatpak override --user --filesystem=xdg-config/MangoHud:ro
```
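You can double-check the overrides were recorded before launching Steam:
```bash
# Lists the env and filesystem overrides set above
flatpak override --user --show com.valvesoftware.Steam
```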
Edit `~/.config/MangoHud/MangoHud.conf`
```conf
## pre defined presets
# -1 = default
# 0 = no display
# 1 = fps only
# 2 = horizontal view
# 3 = extended
# 4 = high detailed information
preset=2
## Enable most of the toggleable parameters (currently excludes `histogram`)
# full
```
## Snapshot
I would recommend creating a "working install" snapshot you can roll back to.
```bash
# Take a snapshot
zfs list -d 1 enc1/vms
export ZFS_VOL='enc1/vms/Gambox1-z4e0t'
zfs snapshot $ZFS_VOL@manual-$(date --iso-8601)
# Restore a snapshot
zfs list -t snapshot $ZFS_VOL
export ZFS_SNAPSHOT='enc1/vms/Gambox1-z4e0t@init-no-drivers-2025-03-03_05-35'
zfs rollback $ZFS_SNAPSHOT
```

View File

@@ -13,6 +13,8 @@
- [Create and Destroy zfs Datasets](#create-and-destroy-zfs-datasets)
- [Create and send snapshots](#create-and-send-snapshots)
- [Cleaning up old snapshots](#cleaning-up-old-snapshots)
- [Creating and restoring snapshots](#creating-and-restoring-snapshots)
- [Filesystem ACLs](#filesystem-acls)
- [VMs](#vms)
- [Converting zvol to qcow2](#converting-zvol-to-qcow2)
- [Converting qcow2 to zvol](#converting-qcow2-to-zvol)
@@ -160,14 +162,28 @@ Since you can't use `-R` to send encrypted datasets recursively you'll need to u
1. Save the datasets from a pool to a text file:
```bash
export SNAPSHOT='@enc1-hourly-2025-03-05_09-00'
export SEND_POOL=enc1
export RECV_POOL=enc0
export DATASETS_FILE=pool_datasets.txt
zfs list -r -H -o name $SEND_POOL > $DATASETS_FILE
```
2. Remove the source pool from the front of all the listed datasets. In vim, for example:
```bash
:%s/^enc1\///
```
3. Now you can run the following:
```bash
# Dry run
for DATASET in $(cat $DATASETS_FILE); do echo "zfs send -v $SEND_POOL/$DATASET$SNAPSHOT | zfs recv $RECV_POOL/$DATASET"; done
# Real thing
for DATASET in $(cat $DATASETS_FILE); do zfs send -v $SEND_POOL/$DATASET$SNAPSHOT | zfs recv $RECV_POOL/$DATASET; done
```
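Afterwards, a rough sanity check that everything arrived (usage won't match byte-for-byte across pools):
```bash
zfs list -r -o name,used,refer $RECV_POOL | less
```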
### Migrating Properties
@@ -213,20 +229,71 @@ zfs destroy rpool/d1
### Create and send snapshots
```bash
export SEND_DATASET=enc0/vms/gitea-docker-runner-data
export RECV_DATASET=enc0/vms/gitea-docker-runner-data-sparse
# snapshot pool and all children
zfs snapshot -r $SEND_DATASET@now
# send all child snapshots
zfs send -R $SEND_DATASET@now | pv | zfs recv $RECV_DATASET
# use the -w raw flag to send encrypted snapshots
zfs send -R -w $SEND_DATASET@snapshot | pv | zfs recv $RECV_DATASET
```
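For follow-up syncs you don't have to resend everything. A sketch of an incremental send, assuming `@now` already exists on the receiving side and `@later` is a newer snapshot:
```bash
zfs snapshot -r $SEND_DATASET@later
# Send only the snapshots between @now and @later
zfs send -R -I @now $SEND_DATASET@later | pv | zfs recv $RECV_DATASET
```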
### Cleaning up old snapshots
If you want to delete every snapshot:
```bash
wget https://raw.githubusercontent.com/bahamas10/zfs-prune-snapshots/master/zfs-prune-snapshots
# Just in case, use tmux. This can take a while
tmux
# The pool you want to clean up
export POOL=enc0
# This can be anything, set it to something memorable
export SNAPSHOTS_FILE=enc0_mar2025_snapshots.txt
# Check the number of snapshots in the dataset
zfs list -t snap -r $POOL | wc -l
# Save the list of snapshots to the snapshots file
zfs list -t snap -r -H -o name $POOL > $SNAPSHOTS_FILE
# Check the file
cat $SNAPSHOTS_FILE | less
# Dry run
for SNAPSHOT in $(cat $SNAPSHOTS_FILE); do echo "zfs destroy -v $SNAPSHOT"; done | less
# Real thing
for SNAPSHOT in $(cat $SNAPSHOTS_FILE); do zfs destroy -v $SNAPSHOT; done
```
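If you only want to prune one naming scheme instead of everything, filter the file first (the pattern here is just an example):
```bash
grep '@enc0-hourly-' $SNAPSHOTS_FILE > hourly_snapshots.txt
export SNAPSHOTS_FILE=hourly_snapshots.txt
```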
### Creating and restoring snapshots
```bash
# Take a snapshot
zfs list -d 1 enc1/vms
export ZFS_VOL='enc1/vms/Gambox1-z4e0t'
zfs snapshot $ZFS_VOL@manual-$(date --iso-8601)
# Restore a snapshot
zfs list -t snapshot $ZFS_VOL
export ZFS_SNAPSHOT='enc1/vms/Gambox1-z4e0t@init-no-drivers-2025-03-03_05-35'
zfs rollback $ZFS_SNAPSHOT
```
### Filesystem ACLs
If you see something like "nfs4xdr_winacl: Failed to set default ACL on...", set
Dataset -> Dataset Details (edit) -> Advanced Options -> ACL Type to Inherit.
```bash
# Remove all ACLs
setfacl -b -R /mnt/enc0/smb/media
```
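You can verify they're gone afterwards:
```bash
# Should show plain mode bits once the ACLs are stripped
getfacl /mnt/enc0/smb/media | head
```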
## VMs
@@ -395,6 +462,19 @@ virsh -c "qemu+unix:///system?socket=/run/truenas_libvirt/libvirt-sock" start $V
Sometimes you need to mount zvols onto the TrueNAS host. You can do this with the block device in /dev.
For simple operations:
```bash
export ZVOL_PATH=enc0/vms/gitea-docker-runner-data-sparse
mount --mkdir /dev/zvol/$ZVOL_PATH /tmp/$ZVOL_PATH
# If you need to create a filesystem
fdisk /dev/zvol/$ZVOL_PATH
mkfs.btrfs /dev/zvol/$ZVOL_PATH
```
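When you're done, unmount before snapshotting or detaching the zvol:
```bash
umount /tmp/$ZVOL_PATH
```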
For bulk operations:
```bash
for path in $(ls /dev/zvol/enc0/dcsi/apps/); do mount --mkdir /dev/zvol/enc0/dcsi/apps/$path /tmp/pvcs/$path; done
for path in $(ls /dev/zvol/enc1/dcsi/apps/); do mount --mkdir /dev/zvol/enc1/dcsi/apps/$path /tmp/pvcs/$path; done
```

View File

@@ -5,6 +5,8 @@
- [Install](#install)
- [Backup and Restore](#backup-and-restore)
- [Gitea Runners](#gitea-runners)
- [Install](#install-1)
- [Cache Cleanup](#cache-cleanup)
Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/). We're not
going to modify much, but we are going to solidify some of the default values in case
@@ -78,6 +80,8 @@ kubectl scale statefulset gitea --replicas 1
<https://docs.gitea.com/next/usage/actions/act-runner/#install-with-the-docker-image>
### Install
```bash
touch config.yaml
@@ -95,3 +99,30 @@ docker run \
--name kube_runner \
-d gitea/act_runner:latest
```
### Cache Cleanup
Each org or project with a package registry has its own cleanup rules. For example,
services -> Settings -> Packages -> Add Cleanup Rule lets you create a cleanup
rule for packages stored under the "services" org. These cleanup rules should run automatically.
The docker builder cache, on the other hand, will balloon out of control over time. The Gitea
docker runner is managed outside of Gitea's context, so you'll need to clean it up yourself.
```bash
# Check used system resources
docker system df
```
You should run something like this on a schedule:
```bash
# Prune the builder cache
docker builder prune -a
```
To run it every day at midnight: `crontab -e`
```bash
0 0 * * * yes | docker builder prune -a
```
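Then confirm the job registered:
```bash
crontab -l
```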

View File

@@ -47,6 +47,7 @@ apt install -y cuda-toolkit
apt install -y nvidia-cuda-toolkit
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html#generating-a-cdi-specification
# You'll need to run this after every apt update
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# monitor nvidia card
@@ -55,7 +56,7 @@ nvidia-smi
# Create IPv6 Network
# Use the below to generate a quadlet for /etc/containers/systemd/localai.network
# podman run --rm ghcr.io/containers/podlet --install --description "Local AI" \
podman network create --ipv6 --label local-ai systemd-localai
# You might want to mount an external drive here.
mkdir /models
@@ -69,6 +70,11 @@ mkdir ~/.localai
echo $(pwgen --capitalize --numerals --secure 64 1) > ~/.localai/token
export MODEL_DIR=/models
export GPU_CONTAINER_IMAGE=quay.io/go-skynet/local-ai:master-cublas-cuda12-ffmpeg
export CPU_CONTAINER_IMAGE=quay.io/go-skynet/local-ai:master-ffmpeg
podman image pull $GPU_CONTAINER_IMAGE
podman image pull $CPU_CONTAINER_IMAGE
# LOCALAI_SINGLE_ACTIVE_BACKEND will unload the previous model before loading the next one
# LOCALAI_API_KEY will set an API key, omit to run unprotected.
@@ -88,7 +94,7 @@ podman run \
--restart always \
-v $MODEL_DIR:/build/models \
-v localai-tmp:/tmp/generated \
$GPU_CONTAINER_IMAGE
# The second (8081) will be our frontend. We'll protect it with basic auth.
# Use the below to generate a quadlet for /etc/containers/systemd/local-ai-webui.container
@@ -101,7 +107,7 @@ podman run \
--restart always \
-v $MODEL_DIR:/build/models \
-v localai-tmp:/tmp/generated \
$CPU_CONTAINER_IMAGE
```
## Running Local AI on Arch with AMD GPU
@@ -252,7 +258,7 @@ client_body_timeout 1h;
client_header_timeout 1h;
proxy_connect_timeout 1h;
proxy_read_timeout 1h;
proxy_send_timeout 1h;
```
Now write your nginx http config files. You'll need two:

View File

@@ -0,0 +1,39 @@
# Tmux
## Commands I keep losing
```bash
# attach if exists, create if not
tmux new-session -A -s name
# list sessions
tmux ls
# switch to another session from within a session
tmux switch-client -t name
# kill a session
tmux kill-session -t name
```
## Example conf
Config is located at `$HOME/.tmux.conf`
```conf
# Set the default shell
set-option -g default-shell "/bin/zsh"
# Split panes using | and _
bind | split-window -h
bind _ split-window -v
# Remove the split panes % and " bindings
unbind '"'
unbind %
# Set mouse mode (for scrolling)
set -g mouse on
# Press prefix (Ctrl+b by default) then r to reload tmux when you update this config
bind r source-file ~/.tmux.conf \; display "Reloaded!"
```
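You can also apply changes from a regular shell without the keybinding:
```bash
tmux source-file ~/.tmux.conf
```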

View File

@@ -1,9 +0,0 @@
# Discord Bot
## Installation
- Py-Cord
Intents: "senses"
Scopes: "Permissions"

View File

@@ -0,0 +1,24 @@
# FFMPEG
## DVD Ripping
```bash
# Concatenate the VOBs listed in mylist.txt
ffmpeg \
-safe 0 \
-f concat \
-i mylist.txt \
-c copy \
~/Downloads/ep1.vob
# Convert the vob
ffmpeg \
-i /run/media/ducoterra/DINOTOPIA_DISC_1/VIDEO_TS/VTS_01_1.VOB \
-vf yadif \
-c:v libx264 \
-preset slow \
-crf 19 \
-c:a aac \
-b:a 256k \
~/Downloads/VTS_01_1.mp4
```

View File

@@ -0,0 +1,2 @@
file '/run/media/ducoterra/DINOTOPIA_DISC_1/VIDEO_TS/VTS_01_1.VOB'
file '/run/media/ducoterra/DINOTOPIA_DISC_1/VIDEO_TS/VTS_01_2.VOB'

View File

@@ -0,0 +1,6 @@
# Magick Image Conversion
```bash
magick input.HEIC output.png
magick input.HEIC output.jpeg
```
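If you have a folder of them, a small batch loop (assumes ImageMagick 7's `magick` binary):
```bash
# Convert every HEIC in the current directory to jpeg
for f in *.HEIC; do magick "$f" "${f%.HEIC}.jpeg"; done
```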