Compare commits

..

28 Commits

Author SHA1 Message Date
acf6421b53 add esphome tab5
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m15s
2026-03-16 09:55:31 -04:00
875795a409 add pillow as a dep 2026-03-16 09:55:03 -04:00
b9d1c2a9a3 fix minor bug in podman template 2026-03-16 09:54:40 -04:00
6f8b7ffca6 add ai hosts to inventory 2026-03-16 09:54:22 -04:00
cc75227a77 reconfigure software ai stack 2026-03-16 09:54:13 -04:00
9ae82fc3de add keychron notes 2026-03-16 09:53:54 -04:00
92edf49948 add quickstart vm notes to driveripper 2026-03-16 09:53:40 -04:00
25d3a7805c add litellm 2026-03-16 09:53:27 -04:00
eb67191706 add toybox caddy 2026-03-16 09:53:13 -04:00
d51560f979 add bifrost docs 2026-03-16 09:52:57 -04:00
88ecb458e1 tab5 voice assist v0.1 2026-03-15 18:21:56 -04:00
31739320aa add apple m4 max benchmark 2026-02-25 16:01:08 -05:00
f70028cf63 init uv project for homelab 2026-02-25 12:23:17 -05:00
ecf4ae1058 update templates with new names 2026-02-25 12:22:29 -05:00
eff2aa4066 add keys 2026-02-25 12:21:51 -05:00
a53e67653d fix restorecon command 2026-02-25 12:21:31 -05:00
d48b9a66cb add machinectl to fedora43 base osbuild 2026-02-25 12:21:17 -05:00
2c5af8507c add fedora kernel notes 2026-02-25 12:21:01 -05:00
ba66c47719 move ai notes from framework_desktop to software_ai_stack 2026-02-25 12:20:17 -05:00
da0b06768e add gpu passthrough notes to driveripper 2026-02-25 12:19:46 -05:00
1c6e1b7032 add container_rabbitmq 2026-02-25 12:19:06 -05:00
087d8888cf remove private nginx.conf 2026-02-25 12:18:44 -05:00
cb486ae289 move gitea port to 22 2026-02-25 12:17:39 -05:00
cd56318ab0 add container_elk notes 2026-02-25 12:16:30 -05:00
416321206d add caddy waf docs 2026-02-25 12:15:49 -05:00
f3c313e610 update bricktracker to 1.3.1 2026-02-16 11:16:09 -05:00
52c6dac263 transition to container_ over podman_ 2026-02-16 10:44:38 -05:00
d4fbbb185f rename podman_ projects to container_ 2026-02-11 11:34:02 -05:00
162 changed files with 4354 additions and 816 deletions

```diff
@@ -3,10 +3,10 @@ run-name: Build and Push the Custom Caddy Image with Route53 DNS Certbot
 on:
   push:
     paths:
-      - active/podman_caddy/**
+      - active/container_caddy/**
       - .gitea/workflows/caddy.yaml
   schedule:
-    - cron: '@daily'
+    - cron: "@daily"
 jobs:
   build-and-push-ddns:
     runs-on: ubuntu-latest
@@ -23,8 +23,8 @@ jobs:
       - name: Build and push Docker image
         uses: https://github.com/docker/build-push-action@v5
         with:
-          context: ${{ gitea.workspace }}/active/podman_caddy
-          file: ${{ gitea.workspace }}/active/podman_caddy/Containerfile
+          context: ${{ gitea.workspace }}/active/container_caddy
+          file: ${{ gitea.workspace }}/active/container_caddy/Containerfile
           push: true
           tags: "gitea.reeseapps.com/services/caddy:latest,gitea.reeseapps.com/services/caddy:${{gitea.sha}}"
           no-cache: true
```

```diff
@@ -3,10 +3,10 @@ run-name: Build and Push the Podman DDNS Image
 on:
   push:
     paths:
-      - active/podman_ddns/**
+      - active/container_ddns/**
       - .gitea/workflows/ddns.yaml
   schedule:
-    - cron: '@daily'
+    - cron: "@daily"
 jobs:
   build-and-push-ddns:
     runs-on: ubuntu-latest
@@ -23,8 +23,8 @@ jobs:
       - name: Build and push Docker image
         uses: https://github.com/docker/build-push-action@v5
         with:
-          context: ${{ gitea.workspace }}/active/podman_ddns
-          file: ${{ gitea.workspace }}/active/podman_ddns/Containerfile
+          context: ${{ gitea.workspace }}/active/container_ddns
+          file: ${{ gitea.workspace }}/active/container_ddns/Containerfile
           push: true
           tags: "gitea.reeseapps.com/services/ddns:latest,gitea.reeseapps.com/services/ddns:${{gitea.sha}}"
           no-cache: true
```

.gitignore

```diff
@@ -11,3 +11,4 @@ TODO.md
 eicar.com
 *.pp
 *.mod
+*.log
```

````diff
@@ -40,6 +40,7 @@ or give me access to your servers.
 - ["find ." shortcuts](#find--shortcuts)
 - [tmux](#tmux)
 - [bash](#bash)
+  - [Bulk File/Folder Renaming](#bulk-filefolder-renaming)
 - [SSH Setup](#ssh-setup)
 - [Git GPG Commit Signing](#git-gpg-commit-signing)
 - [Important Dates and Times](#important-dates-and-times)
@@ -91,15 +92,15 @@ find . -type d -exec chmod 755 {} \;
 Here are some handy references for default bash variables
 ```text
-$0    The name of the script being executed.
-$1-$9 The first nine command-line arguments.
-$#    The number of command-line arguments.
-$*    All command-line arguments as a single string.
-$@    All command-line arguments as an array.
-$?    The exit status of the last executed command.
-$$    The process ID of the current shell.
-$!    The process ID of the last background command.
-$-    Shows the current shell options or flags.
+`$0`    The name of the script being executed.
+`$1-$9` The first nine command-line arguments.
+`$#`    The number of command-line arguments.
+`$*`    All command-line arguments as a single string.
+`$@`    All command-line arguments as an array.
+`$?`    The exit status of the last executed command.
+`$$`    The process ID of the current shell.
+`$!`    The process ID of the last background command.
+`$-`    Shows the current shell options or flags.
 ```
 And here are the meanings of the shell options
@@ -118,6 +119,15 @@ So to check if you are in an interactive shell:
 [[ $- == *i* ]] && Some command here
 ```
+### Bulk File/Folder Renaming
+```bash
+for change_dir in $(ls -d podman_*); do
+  new_name=$(echo "$change_dir" | sed 's/podman_/container_/')
+  mv "$change_dir" "$new_name"
+done
+```
 ## SSH Setup
 Generate a key (password protect it!)
@@ -198,7 +208,7 @@ signed you'll see an output. If not, nothing will show.
 ## Important Dates and Times
 | Time  | Day | Description |
-| ----- | -------- | ---------------------------------- |
+| ----- | --- | ---------------------------------- |
 | 00:00 | All | Automated builds |
 | 00:00 | All | NAS Snapshots |
 | 02:00 | All | Backups |
@@ -206,7 +216,6 @@ signed you'll see an output. If not, nothing will show.
 | 05:00 | All | VM Server Security Updates |
 | 05:00 | All | Unifi Protect Firmware Updates |
 | 06:00 | All | Unifi Network Firmware Updates |
-| 06:00 | Saturday | Truenas Disk Scrub |
 ## Project Lifecycle
@@ -296,7 +305,7 @@ Assuming your project name is `my-project` and it runs on `podman`
 1. Create a new directory called `podman_my-project` under the `active`
    directory
-2. Copy the readme template: `cp project_readme_template.md active/podman_my-project/my-project.md`
+2. Copy the readme template: `cp project_readme_template.md active/container_my-project/my-project.md`
 3. Populate `my-project.md` as you work through the install process
 4. Create a directory called `secrets` in `podman_my-project`. This will be
    automatically gitignored. Put all secrets here.
````
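The `podman_` to `container_` rename done by the README's new Bulk File/Folder Renaming loop can also be sketched as a small Python helper (illustrative only; directory names here are hypothetical):

```python
from pathlib import Path


def bulk_rename(root: str, old: str = "podman_", new: str = "container_") -> list[str]:
    """Rename every directory under root whose name starts with `old`."""
    renamed = []
    # Materialize the listing first so renames don't disturb iteration.
    for path in sorted(Path(root).iterdir()):
        if path.is_dir() and path.name.startswith(old):
            target = path.with_name(new + path.name[len(old):])
            path.rename(target)
            renamed.append(target.name)
    return renamed
```

Unlike the shell loop, this skips regular files and returns the new names, which makes a dry-run or log trivial to add.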


@@ -0,0 +1,208 @@
# Podman bifrost
- [Podman bifrost](#podman-bifrost)
- [Setup bifrost Project](#setup-bifrost-project)
- [Install bifrost](#install-bifrost)
- [Create the ai user](#create-the-ai-user)
- [Write the bifrost compose spec](#write-the-bifrost-compose-spec)
- [A Note on Volumes](#a-note-on-volumes)
- [Convert bifrost compose spec to quadlets](#convert-bifrost-compose-spec-to-quadlets)
- [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
- [Expose bifrost](#expose-bifrost)
- [Using bifrost](#using-bifrost)
- [Adding Models](#adding-models)
- [Testing Models](#testing-models)
- [Backup bifrost](#backup-bifrost)
- [Upgrade bifrost](#upgrade-bifrost)
- [Upgrade Quadlets](#upgrade-quadlets)
- [Uninstall](#uninstall)
- [Notes](#notes)
- [SELinux](#selinux)
## Setup bifrost Project
- [ ] Copy and rename this folder to active/container_bifrost
- [ ] Find and replace bifrost with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer
## Install bifrost
### Create the ai user
```bash
# SSH into your podman server as root
useradd ai
loginctl enable-linger $(id -u ai)
systemctl --user --machine=ai@.host enable podman-restart
systemctl --user --machine=ai@.host enable --now podman.socket
mkdir -p /home/ai/.config/containers/systemd
```
### Write the bifrost compose spec
Edit the compose.yaml at active/container_bifrost/compose/compose.yaml
#### A Note on Volumes
Named volumes are stored at `/home/ai/.local/share/containers/storage/volumes/` (under the `ai` user that runs this service).
### Convert bifrost compose spec to quadlets
Run the following to convert a compose.yaml into the various `.container` files for systemd:
```bash
# Generate the systemd service
podman run \
  --security-opt label=disable \
  --rm \
  -v $(pwd)/active/container_bifrost/compose:/compose \
  -v $(pwd)/active/container_bifrost/quadlets:/quadlets \
  quay.io/k9withabone/podlet \
  -f /quadlets \
  -i \
  --overwrite \
  compose /compose/compose.yaml
# Copy the files to the server
export PODMAN_SERVER=ai-ai
scp -r active/container_bifrost/quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
```
### Start and enable your systemd quadlet
SSH into your podman server as the `ai` user (the quadlets run under its user manager):
```bash
systemctl --user daemon-reload
systemctl --user restart bifrost
journalctl --user -u bifrost -f
# Enable auto-update service which will pull new container images automatically every day
systemctl --user enable --now podman-auto-update.timer
```
### Expose bifrost
1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.
## Using bifrost
### Adding Models
```json
// qwen3.5-35b-a3b-thinking
{
  "temperature": 1,
  "top_p": 0.95,
  "presence_penalty": 1.5,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": true
    }
  }
}
// qwen3.5-35b-a3b-coding
{
  "temperature": 0.6,
  "top_p": 0.95,
  "presence_penalty": 0,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": true
    }
  }
}
// qwen3.5-35b-a3b-instruct
{
  "temperature": 0.7,
  "top_p": 0.8,
  "presence_penalty": 1.5,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": false
    }
  }
}
```
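As a client-side sketch of how these profiles could be applied (assuming an OpenAI-compatible `/v1/chat/completions` endpoint; values are copied from the configs above):

```python
# Sampling profiles from the model configs above, keyed by model name.
PROFILES = {
    "qwen3.5-35b-a3b-thinking": {"temperature": 1, "top_p": 0.95, "presence_penalty": 1.5, "thinking": True},
    "qwen3.5-35b-a3b-coding": {"temperature": 0.6, "top_p": 0.95, "presence_penalty": 0, "thinking": True},
    "qwen3.5-35b-a3b-instruct": {"temperature": 0.7, "top_p": 0.8, "presence_penalty": 1.5, "thinking": False},
}


def build_payload(model: str, prompt: str) -> dict:
    """Assemble a chat-completions request body with the model's sampling profile."""
    p = PROFILES[model]
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": p["temperature"],
        "top_p": p["top_p"],
        "presence_penalty": p["presence_penalty"],
        "extra_body": {
            "top_k": 20,
            "min_p": 0,
            "repetition_penalty": 1,
            "chat_template_kwargs": {"enable_thinking": p["thinking"]},
        },
    }
```

Keeping the profiles in one dict means a model's sampling settings only have to be corrected in one place.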
### Testing Models
```bash
# List models
curl -L -X GET 'https://aipi.reeseapps.com/v1/models' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234'
# Chat completion (replace gpt-4o-mini with the public model name of any db-model)
curl -L -X POST 'https://aipi.reeseapps.com/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "Hey, how is it going?"
    }
  ]
}'
```
## Backup bifrost
Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
## Upgrade bifrost
### Upgrade Quadlets
Upgrades should be a repeat of [writing the compose spec](#convert-bifrost-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)
```bash
export PODMAN_SERVER=
scp -r active/container_bifrost/quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
ssh $PODMAN_SERVER systemctl --user daemon-reload
ssh $PODMAN_SERVER systemctl --user restart bifrost
```
## Uninstall
```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer
# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
#   userdel: user ai is currently used by process 591255
# kill those processes and try again
userdel ai
```
## Notes
### SELinux
<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>
`:z` allows a container to share a mounted volume with all other containers.
`:Z` allows a container to reserve a mounted volume and prevents any other container from accessing it.
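In practice the label is appended to the mount spec, e.g. (illustrative paths; the shared `:z` example is hypothetical for this service):

```text
# compose.yaml bind mount, shared label
- ./shared-config:/config:z

# quadlet, private label
Volume=bifrost-data:/app/data:Z
```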


@@ -0,0 +1,32 @@
services:
  bifrost:
    image: docker.io/maximhq/bifrost:latest
    container_name: bifrost
    ports:
      - "8000:8000"
    volumes:
      - bifrost-data:/app/data
    environment:
      - APP_PORT=8000
      - APP_HOST=0.0.0.0
      - LOG_LEVEL=info
      - LOG_STYLE=json
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "-O",
          "/dev/null",
          "http://localhost:8000/health", # health endpoint on APP_PORT
        ]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped


@@ -0,0 +1,17 @@
[Container]
ContainerName=bifrost
Environment=APP_PORT=8000 APP_HOST=0.0.0.0 LOG_LEVEL=info LOG_STYLE=json
HealthCmd=["wget", "--no-verbose", "--tries=1", "-O", "/dev/null", "http://localhost:8000/health"]
HealthInterval=30s
HealthRetries=3
HealthTimeout=10s
Image=docker.io/maximhq/bifrost:latest
PublishPort=8000:8000
Ulimit=nofile=65536:65536
Volume=bifrost-data:/app/data
[Service]
Restart=always
[Install]
WantedBy=default.target

````diff
@@ -2,6 +2,14 @@
 <https://gitea.baerentsen.space/FrederikBaerentsen/BrickTracker/src/branch/master/docs/quickstart.md>
+## Update
+```bash
+scp active/container_bricktracker/connorbricktracker-compose.yaml bricktracker:
+ssh bricktracker
+docker compose -f connorbricktracker-compose.yaml up -d
+```
 ## Setup
 ### Create the bricktracker user
@@ -34,7 +42,7 @@ mkdir -p /home/bricktracker/.config/containers/systemd
   bricktracker:
     container_name: BrickTracker
     restart: unless-stopped
-    image: gitea.baerentsen.space/frederikbaerentsen/bricktracker:1.2.2
+    image: gitea.baerentsen.space/frederikbaerentsen/bricktracker:1.3.1
     ports:
       - "3333:3333"
     volumes:
````

```diff
@@ -2,7 +2,7 @@ services:
   bricktracker:
     container_name: BrickTracker
     restart: unless-stopped
-    image: gitea.baerentsen.space/frederikbaerentsen/bricktracker:1.2.2
+    image: gitea.baerentsen.space/frederikbaerentsen/bricktracker:1.3.1
     ports:
       - "3333:3333"
     volumes:
```

```diff
@@ -1,7 +1,9 @@
 FROM docker.io/caddy:2-builder AS builder
 RUN xcaddy build \
-    --with github.com/caddy-dns/route53@v1.6.0
+    --with github.com/caddy-dns/route53@v1.6.0 \
+    --with github.com/fabriziosalmi/caddy-waf
 FROM docker.io/caddy:2
```

````diff
@@ -6,6 +6,8 @@
 - [Ansible](#ansible)
 - [Manual](#manual)
 - [Adding a new Caddy Record](#adding-a-new-caddy-record)
+- [Logs](#logs)
+- [Caddy WAF](#caddy-waf)
 ## Custom Caddy Image
@@ -62,15 +64,20 @@ Now you can install the Caddy service with something like:
 # Base Proxy
 ansible-playbook \
   -i ansible/inventory.yaml \
-  active/podman_caddy/install_caddy_proxy.yaml
+  active/container_caddy/install_caddy_proxy.yaml
 # Deskwork (AI) Proxy
 ansible-playbook \
   -i ansible/inventory.yaml \
-  active/podman_caddy/install_caddy_deskwork.yaml
+  active/container_caddy/install_caddy_deskwork.yaml
+# Toybox (AI) Proxy
+ansible-playbook \
+  -i ansible/inventory.yaml \
+  active/container_caddy/install_caddy_toybox.yaml
 ```
-See ansible playbook [install_caddy.yaml](/active/podman_caddy/install_caddy.yaml)
+See ansible playbook [install_caddy.yaml](/active/container_caddy/install_caddy.yaml)
 ### Manual
@@ -135,6 +142,69 @@ Before you can create a Caddyfile you need records that point to your server.
 You can either create them manually in your DNS provider of choice or use the provided
 ddns service:
-1. Update the [ddns caddy records](/active/podman_ddns/secrets/caddy_records.yaml)
-2. (Optional) Update the Caddyfile at `active/podman_caddy/secrets/Caddyfile`
-3. Run the [caddy ansible playbook](/active/podman_caddy/caddy.md#install-caddy)
+1. Update the [ddns caddy records](/active/container_ddns/secrets/caddy_records.yaml)
+2. (Optional) Update the Caddyfile at `active/container_caddy/secrets/Caddyfile`
+3. Run the [caddy ansible playbook](/active/container_caddy/caddy.md#install-caddy)
+## Logs
+```bash
+# Follow remote connections
+podman logs -f caddy | grep -e '^{' | jq -c '.request | {remote_ip,host}'
+# Filter out noisy hosts
+podman logs -f caddy | grep -e '^{' | jq -c '.request | {remote_ip,host} | select(.host != "gitea.reeseapps.com")'
+# Focus on user agents
+podman logs -f caddy | grep -e '^{' | jq -c '
+{
+  "User-Agent": .request.headers["User-Agent"],
+  remote_ip: .request.remote_ip,
+  host: .request.host,
+  status: .status
+}
+'
+```
+## Caddy WAF
+<https://github.com/fabriziosalmi/caddy-waf>
+1. Copy the rules.json to `/etc/caddy/rules.json`
+2. Update the Caddyfile to something like this:
+```Caddyfile
+gitea.reeseapps.com:443 {
+  log {
+    output stdout
+    format json {
+      message_key msg    # Key for the log message
+      level_key severity # Key for the log level
+      time_key timestamp # Key for the timestamp
+      name_key logger    # Key for the logger name
+      caller_key function # Key for the caller information
+      stacktrace_key stack # Key for error stacktraces
+      time_format "2006-01-02 15:04:05 MST" # RFC3339-like format
+      time_local           # Use local timezone
+      duration_format "ms" # Show durations in milliseconds
+      level_format "upper" # Uppercase log levels
+    }
+  }
+  route {
+    waf {
+      metrics_endpoint /waf_metrics
+      rule_file rules.json
+    }
+    @wafmetrics {
+      path /waf_metrics
+    }
+    handle @wafmetrics { } # empty → let the WAF serve the metrics
+    handle {
+      reverse_proxy gitea.reeselink.com:3000
+    }
+  }
+}
+```
````


@@ -0,0 +1,45 @@
- name: Create Caddy Proxy
  hosts: caddy
  tasks:
    # Create the directory first so the template tasks below have a destination
    - name: Create /etc/caddy dir
      ansible.builtin.file:
        path: /etc/caddy
        state: directory
        mode: "0755"
    - name: Copy Containerfile for build
      template:
        src: Containerfile
        dest: /etc/caddy/Containerfile
        owner: root
        group: root
        mode: "0644"
    - name: Build Caddy Image
      shell:
        cmd: podman build -t gitea.reeseapps.com/services/caddy:latest -f /etc/caddy/Containerfile
    - name: Copy Caddyfile
      template:
        src: secrets/proxy.Caddyfile
        dest: /etc/caddy/Caddyfile
        owner: root
        group: root
        mode: "0644"
    - name: Copy rules.json
      template:
        src: rules.json
        dest: /etc/caddy/rules.json
        owner: root
        group: root
        mode: "0644"
    - name: Template Caddy Container Services
      template:
        src: caddy.container
        dest: /etc/containers/systemd/caddy.container
        owner: root
        group: root
        mode: "0644"
    - name: Reload and start the Caddy service
      ansible.builtin.systemd_service:
        state: restarted
        name: caddy.service
        enabled: true
        daemon_reload: true


@@ -0,0 +1,28 @@
- name: Create Caddy Proxy
  hosts: toybox-root
  tasks:
    - name: Create /etc/caddy dir
      ansible.builtin.file:
        path: /etc/caddy
        state: directory
        mode: "0755"
    - name: Copy Caddyfile
      template:
        src: secrets/toybox.Caddyfile
        dest: /etc/caddy/Caddyfile
        owner: root
        group: root
        mode: "0644"
    - name: Template Caddy Container Services
      template:
        src: caddy.container
        dest: /etc/containers/systemd/caddy.container
        owner: root
        group: root
        mode: "0644"
    - name: Reload and start the Caddy service
      ansible.builtin.systemd_service:
        state: restarted
        name: caddy.service
        enabled: true
        daemon_reload: true


@@ -0,0 +1,26 @@
[
  {
    "id": "block-scanners",
    "phase": 1,
    "pattern": "(?i)(nikto|sqlmap|nmap|acunetix|nessus|openvas|wpscan|dirbuster|burpsuite|owasp zap|netsparker|appscan|arachni|skipfish|gobuster|wfuzz|hydra|metasploit|qualys|zap|w3af|openwebspider|appspider|rapid7|nuclei|zgrab|vega|gospider|gxspider|whatweb|xspider|joomscan|uniscan|blindelephant)",
    "targets": [
      "HEADERS:User-Agent"
    ],
    "severity": "CRITICAL",
    "action": "block",
    "score": 10,
    "description": "Block traffic from known vulnerability scanners and penetration testing tools."
  },
  {
    "id": "block-crawlers",
    "phase": 1,
    "pattern": "(meta-externalagent)",
    "targets": [
      "HEADERS:User-Agent"
    ],
    "severity": "CRITICAL",
    "action": "block",
    "score": 10,
    "description": "Block traffic from web scrapers and crawlers."
  }
]
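A quick way to sanity-check the scanner User-Agent pattern before deploying it (pure Python; the pattern is copied from the `block-scanners` rule, with its duplicate alternatives removed):

```python
import re

# User-Agent pattern from the "block-scanners" rule above.
SCANNER_RE = re.compile(
    r"(?i)(nikto|sqlmap|nmap|acunetix|nessus|openvas|wpscan|dirbuster|burpsuite"
    r"|owasp zap|netsparker|appscan|arachni|skipfish|gobuster|wfuzz|hydra"
    r"|metasploit|qualys|zap|w3af|openwebspider|appspider|rapid7|nuclei|zgrab"
    r"|vega|gospider|gxspider|whatweb|xspider|joomscan|uniscan|blindelephant)"
)


def is_blocked(user_agent: str) -> bool:
    """Return True if the User-Agent would match the scanner-blocking rule."""
    return SCANNER_RE.search(user_agent) is not None
```

Note that short alternatives like `zap` or `vega` are substring matches, so it is worth testing a few legitimate browser User-Agents against any new token you add.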

````diff
@@ -20,7 +20,7 @@
 ## Setup certbot Project
-- [ ] Copy and rename this folder to active/podman_certbot
+- [ ] Copy and rename this folder to active/container_certbot
 - [ ] Find and replace certbot with the name of the service.
 - [ ] Create the rootless user to run the podman containers
 - [ ] Write the compose.yaml spec for your service
@@ -65,8 +65,8 @@ Run the following to convert a compose.yaml into the various `.container` files
 podman run \
   --security-opt label=disable \
   --rm \
-  -v $(pwd)/active/podman_certbot/:/compose \
-  -v $(pwd)/active/podman_certbot/quadlets:/quadlets \
+  -v $(pwd)/active/container_certbot/:/compose \
+  -v $(pwd)/active/container_certbot/quadlets:/quadlets \
   quay.io/k9withabone/podlet \
   -f /quadlets \
   -i \
@@ -75,7 +75,7 @@ compose /compose/compose.yaml
 # Copy the files to the server
 export PODMAN_SERVER=
-scp -r active/podman_certbot/quadlets/. $PODMAN_SERVER:/home/certbot/.config/containers/systemd/
+scp -r active/container_certbot/quadlets/. $PODMAN_SERVER:/home/certbot/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R certbot:certbot /home/certbot/.config/containers/systemd/
 ```
@@ -107,8 +107,8 @@ systemctl --user enable --now podman-auto-update.timer
 ### Expose certbot
-1. If you need a domain, follow the [DDNS instructions](/active/podman_ddns/ddns.md#install-a-new-ddns-service)
-2. For a web service, follow the [Caddy instructions](/active/podman_caddy/caddy.md#adding-a-new-caddy-record)
+1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
+2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
 3. Finally, follow your OS's guide for opening ports via its firewall service.
 #### firewalld
````


@@ -0,0 +1 @@
3.13

````diff
@@ -59,7 +59,7 @@ Now you can install the DDNS service with something like:
 ansible-playbook \
   -i ansible/inventory.yaml \
   -l proxy \
-  active/podman_ddns/install_ddns.yaml
+  active/container_ddns/install_ddns.yaml
 ```
 See ansible playbook [install_ddns.yaml](/install_ddns.yaml)
@@ -75,8 +75,8 @@ multiple servers. If you have a podman server, it'll have its own
 ansible-playbook \
   -i ansible/inventory.yaml \
   -l caddy \
-  active/podman_ddns/install_ddns.yaml \
-  -e "@active/podman_ddns/secrets/records.yaml"
+  active/container_ddns/install_ddns.yaml \
+  -e "@active/container_ddns/secrets/records.yaml"
 ```
 ## Development
````
active/container_elk/.env

@@ -0,0 +1,41 @@
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme
# Version of Elastic products
STACK_VERSION=8.7.1
# Set the cluster name
CLUSTER_NAME=docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
# Increase or decrease based on the available host memory (in bytes)
ES_MEM_LIMIT=1073741824
KB_MEM_LIMIT=1073741824
LS_MEM_LIMIT=1073741824
# SAMPLE Predefined Key only to be used in POC environments
ENCRYPTION_KEY=c34d38b3a14956121ff2170e5030b471551370178f43e5626eec58b04a30fae2
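The constraints noted in the comments (6-character minimum passwords, `basic` or `trial` license) can be checked before bringing the stack up. A minimal sketch (the parser handles only simple `KEY=VALUE` lines, which is all this file uses):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


def check_env(env: dict) -> list[str]:
    """Return a list of problems with the Elastic stack .env settings."""
    problems = []
    for key in ("ELASTIC_PASSWORD", "KIBANA_PASSWORD"):
        if len(env.get(key, "")) < 6:
            problems.append(f"{key} must be at least 6 characters")
        elif env.get(key) == "changeme":
            problems.append(f"{key} still has the placeholder value")
    if env.get("LICENSE") not in ("basic", "trial"):
        problems.append("LICENSE must be 'basic' or 'trial'")
    return problems
```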


@@ -0,0 +1,219 @@
version: "3.8"
volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
  metricbeatdata01:
    driver: local
  filebeatdata01:
    driver: local
  logstashdata01:
    driver: local
networks:
  default:
    name: elastic
    external: false
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          " - name: es01\n"\
          "   dns:\n"\
          "    - es01\n"\
          "    - localhost\n"\
          "   ip:\n"\
          "    - 127.0.0.1\n"\
          " - name: kibana\n"\
          "   dns:\n"\
          "    - kibana\n"\
          "    - localhost\n"\
          "   ip:\n"\
          "    - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions";
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  metricbeat01:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
    user: root
    volumes:
      - certs:/usr/share/metricbeat/certs
      - metricbeatdata01:/usr/share/metricbeat/data
      - "./metricbeat.yaml:/usr/share/metricbeat/metricbeat.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"
      - "/proc:/hostfs/proc:ro"
      - "/:/hostfs:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
      - KIBANA_HOSTS=http://kibana:5601
      - LOGSTASH_HOSTS=http://logstash01:9600
  filebeat01:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/beats/filebeat:${STACK_VERSION}
    user: root
    volumes:
      - certs:/usr/share/filebeat/certs
      - filebeatdata01:/usr/share/filebeat/data
      - "./filebeat_ingest_data/:/usr/share/filebeat/ingest_data/"
      - "./filebeat.yaml:/usr/share/filebeat/filebeat.yml:ro"
      - "/var/lib/docker/containers:/var/lib/docker/containers:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
      - KIBANA_HOSTS=http://kibana:5601
      - LOGSTASH_HOSTS=http://logstash01:9600
  logstash01:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - certs:/usr/share/logstash/certs
      - logstashdata01:/usr/share/logstash/data
      - "./logstash_ingest_data/:/usr/share/logstash/ingest_data/"
      - "./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200


@@ -0,0 +1,14 @@
# ELK Stack
## Install
<https://www.elastic.co/blog/getting-started-with-the-elastic-stack-and-docker-compose>
```bash
# Copy over the files
scp -rp active/container_elk/. elk:elk
# SSH into the host
ssh -t elk "cd elk ; bash --login"
# Run the services
docker compose -f elk-compose.yaml up
```
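The compose file pulls its settings from a `.env` file kept next to it. A minimal sketch covering only the variables referenced in the compose excerpt above (every value here is a placeholder; the full upstream compose file may reference additional variables such as per-service memory limits):
```env
STACK_VERSION=8.17.0
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
ENCRYPTION_KEY=at-least-32-characters-of-random-data
KB_MEM_LIMIT=1073741824
```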


@@ -0,0 +1,29 @@
filebeat.inputs:
- type: filestream
id: default-filestream
paths:
- ingest_data/*.log
filebeat.autodiscover:
providers:
- type: docker
hints.enabled: true
processors:
- add_docker_metadata: ~
setup.kibana:
host: ${KIBANA_HOSTS}
username: ${ELASTIC_USER}
password: ${ELASTIC_PASSWORD}
output.elasticsearch:
hosts: ${ELASTIC_HOSTS}
username: ${ELASTIC_USER}
password: ${ELASTIC_PASSWORD}
ssl.enabled: true
ssl.certificate_authorities: "certs/ca/ca.crt"


@@ -0,0 +1,24 @@
input {
file {
#https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
#default is "tail", which assumes more data will come into the file.
#change to mode => "read" if the file is a complete file. By default, the file will be removed once reading is complete -- back up your files if you need them.
mode => "tail"
path => "/usr/share/logstash/ingest_data/*"
}
}
filter {
}
output {
elasticsearch {
index => "logstash-%{+YYYY.MM.dd}"
hosts => "${ELASTIC_HOSTS}"
user => "${ELASTIC_USER}"
password => "${ELASTIC_PASSWORD}"
cacert => "certs/ca/ca.crt"
}
}


@@ -0,0 +1,62 @@
metricbeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
metricbeat.modules:
- module: elasticsearch
xpack.enabled: true
period: 10s
hosts: ${ELASTIC_HOSTS}
ssl.certificate_authorities: "certs/ca/ca.crt"
ssl.certificate: "certs/es01/es01.crt"
ssl.key: "certs/es01/es01.key"
username: ${ELASTIC_USER}
password: ${ELASTIC_PASSWORD}
ssl.enabled: true
- module: logstash
xpack.enabled: true
period: 10s
hosts: ${LOGSTASH_HOSTS}
- module: kibana
metricsets:
- stats
period: 10s
hosts: ${KIBANA_HOSTS}
username: ${ELASTIC_USER}
password: ${ELASTIC_PASSWORD}
xpack.enabled: true
- module: docker
metricsets:
- "container"
- "cpu"
- "diskio"
- "healthcheck"
- "info"
#- "image"
- "memory"
- "network"
hosts: ["unix:///var/run/docker.sock"]
period: 10s
enabled: true
processors:
- add_host_metadata: ~
- add_docker_metadata: ~
output.elasticsearch:
hosts: ${ELASTIC_HOSTS}
username: ${ELASTIC_USER}
password: ${ELASTIC_PASSWORD}
ssl:
certificate: "certs/es01/es01.crt"
certificate_authorities: "certs/ca/ca.crt"
key: "certs/es01/es01.key"


@@ -24,7 +24,7 @@ services:
       - /etc/localtime:/etc/localtime:ro
     ports:
       - "3000:3000"
-      - "2222:22"
+      - "22:22"
     depends_on:
       - db


@@ -21,11 +21,14 @@
 Prereqs
-1. Mount data dirs at `/srv/gitea-data` and `/srv/gitea-db`
-2. Create a gitea user and update gitea-compose.yaml with the correct UID
+1. Change the default SSH port for your server to 2022 (or something similar).
+2. Allow SSH to bind to that port: `semanage port -a -t ssh_port_t -p tcp 2022`
+3. Allow 2022 on the firewall: `firewall-cmd --add-port=2022/tcp --permanent && firewall-cmd --reload`
+4. Mount data dirs at `/srv/gitea-data` and `/srv/gitea-db`
+5. Create a gitea user and update gitea-compose.yaml with the correct UID
 ```bash
-scp active/podman_gitea/gitea-compose.yaml gitea:
+scp active/container_gitea/gitea-compose.yaml gitea:
 docker compose -f gitea-compose.yaml up -d
 ```
@@ -72,22 +75,22 @@ exit
 ```bash
 # Run this in Homelab, not on the server.
-mkdir $(pwd)/active/podman_gitea/quadlets
+mkdir $(pwd)/active/container_gitea/quadlets
 # Generate the systemd service
 podman run \
   --network none \
   --rm \
-  -v $(pwd)/active/podman_gitea/compose:$(pwd)/active/podman_gitea/compose:z \
-  -v $(pwd)/active/podman_gitea/quadlets:$(pwd)/active/podman_gitea/quadlets:z \
+  -v $(pwd)/active/container_gitea/compose:$(pwd)/active/container_gitea/compose:z \
+  -v $(pwd)/active/container_gitea/quadlets:$(pwd)/active/container_gitea/quadlets:z \
   quay.io/k9withabone/podlet \
-  -f $(pwd)/active/podman_gitea/quadlets \
+  -f $(pwd)/active/container_gitea/quadlets \
   -i \
   --overwrite \
-  compose $(pwd)/active/podman_gitea/compose/compose.yaml
+  compose $(pwd)/active/container_gitea/compose/compose.yaml
 # Copy the files to the server
-scp -r $(pwd)/active/podman_gitea/quadlets/. 3dserver:/home/gitea/.config/containers/systemd/
+scp -r $(pwd)/active/container_gitea/quadlets/. 3dserver:/home/gitea/.config/containers/systemd/
 ```
 ### Install Quadlets
@@ -127,7 +130,7 @@ systemctl --user enable --now podman-auto-update.timer
 ```bash
 # Upload quadlets and restart
 export PODMAN_SERVER=3dserver
-scp -r active/podman_gitea/quadlets/. $PODMAN_SERVER:/home/gitea/.config/containers/systemd/
+scp -r active/container_gitea/quadlets/. $PODMAN_SERVER:/home/gitea/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R gitea:gitea /home/gitea/.config/containers/systemd/
 ssh $PODMAN_SERVER


@@ -13,14 +13,14 @@ podman run \
   --name=grafana \
   --volume grafana-storage:/var/lib/grafana \
   --network=systemd-graphite \
-  grafana/grafana-enterprise > active/podman_grafana/grafana.container
+  grafana/grafana-enterprise > active/container_grafana/grafana.container
 ```
 Copy the graphite.container and graphite.network file to the server you want to run it on
 ```bash
 export PODMAN_SERVER=
-scp active/podman_grafana/grafana.container $PODMAN_SERVER:/etc/containers/systemd/
+scp active/container_grafana/grafana.container $PODMAN_SERVER:/etc/containers/systemd/
 ssh $PODMAN_SERVER systemctl daemon-reload
 ssh $PODMAN_SERVER systemctl enable --now grafana.service


@@ -7,7 +7,7 @@
 ```bash
 # Generate the network
 podman run ghcr.io/containers/podlet --description Graphite \
-  podman network create --ipv6 graphite > active/podman_graphite/graphite.network
+  podman network create --ipv6 graphite > active/container_graphite/graphite.network
 # Generate the systemd container service
 podman run ghcr.io/containers/podlet --description Graphite \
@@ -23,15 +23,15 @@ podman run \
   -v graphite_configs:/opt/graphite/conf \
   -v graphite_data:/opt/graphite/storage \
   -v graphite_statsd_config:/opt/statsd/config \
-  ghcr.io/deniszh/graphite-statsd > active/podman_graphite/graphite.container
+  ghcr.io/deniszh/graphite-statsd > active/container_graphite/graphite.container
 ```
 Copy the graphite.container and graphite.network file to the server you want to run it on
 ```bash
 export PODMAN_SERVER=
-scp active/podman_graphite/graphite.network $PODMAN_SERVER:/etc/containers/systemd/
-scp active/podman_graphite/graphite.container $PODMAN_SERVER:/etc/containers/systemd/
+scp active/container_graphite/graphite.network $PODMAN_SERVER:/etc/containers/systemd/
+scp active/container_graphite/graphite.container $PODMAN_SERVER:/etc/containers/systemd/
 ssh $PODMAN_SERVER systemctl daemon-reload
 ssh $PODMAN_SERVER systemctl start graphite.network


@@ -22,7 +22,7 @@
 ## Setup immich Project
-- [x] Copy and rename this folder to active/podman_immich
+- [x] Copy and rename this folder to active/container_immich
 - [x] Find and replace immich with the name of the service.
 - [x] Create the rootless user to run the podman containers
 - [ ] Write the compose.yaml spec for your service
@@ -36,8 +36,8 @@
 <https://docs.immich.app/install/docker-compose/>
 ```bash
-scp active/podman_immich/release-compose.yaml immich:
-scp active/podman_immich/release-env immich:.env
+scp active/container_immich/release-compose.yaml immich:
+scp active/container_immich/release-env immich:.env
 mkdir /srv/immich
 docker compose -f release-compose.yaml up -d
@@ -62,10 +62,10 @@ mkdir -p /home/immich/.config/containers/systemd
 ```bash
 # Pull the compose file
-wget -O active/podman_immich/release-compose.yaml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
+wget -O active/container_immich/release-compose.yaml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
 # Pull the .env file
-wget -O active/podman_immich/release-env https://github.com/immich-app/immich/releases/latest/download/example.env
+wget -O active/container_immich/release-env https://github.com/immich-app/immich/releases/latest/download/example.env
 ```
 2. Edit the compose.yaml. Replace all environment variables with their correct values.
@@ -84,8 +84,8 @@ Run the following to convert a compose.yaml into the various `.container` files
 podman run \
   --security-opt label=disable \
   --rm \
-  -v $(pwd)/active/podman_immich/compose:/compose \
-  -v $(pwd)/active/podman_immich/quadlets:/quadlets \
+  -v $(pwd)/active/container_immich/compose:/compose \
+  -v $(pwd)/active/container_immich/quadlets:/quadlets \
   quay.io/k9withabone/podlet \
   -f /quadlets \
   -i \
@@ -94,7 +94,7 @@ compose /compose/compose.yaml
 # Copy the files to the server
 export PODMAN_SERVER=3dserver
-scp -r active/podman_immich/quadlets/. $PODMAN_SERVER:/home/immich/.config/containers/systemd/
+scp -r active/container_immich/quadlets/. $PODMAN_SERVER:/home/immich/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R immich:immich /home/immich/.config/containers/systemd/
 ```
@@ -122,8 +122,8 @@ systemctl --user enable --now podman-auto-update.timer
 ### Expose immich
-1. If you need a domain, follow the [DDNS instructions](/active/podman_ddns/ddns.md#install-a-new-ddns-service)
-2. For a web service, follow the [Caddy instructions](/active/podman_caddy/caddy.md#adding-a-new-caddy-record)
+1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
+2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
 3. Finally, follow your OS's guide for opening ports via its firewall service.
 #### firewalld


@@ -32,7 +32,7 @@ They have podman rootless instructions!
 ```bash
 export PODMAN_SERVER=3dserver
-scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
+scp -r active/container_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/
 ssh $PODMAN_SERVER
@@ -50,7 +50,7 @@ They have podman rootless instructions!
 ```bash
 # Upload quadlets and restart
 export PODMAN_SERVER=3dserver
-scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
+scp -r active/container_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/
 ssh $PODMAN_SERVER


@@ -21,7 +21,7 @@
 ## Setup keycloak Project
-- [ ] Copy and rename this folder to active/podman_keycloak
+- [ ] Copy and rename this folder to active/container_keycloak
 - [ ] Find and replace keycloak with the name of the service.
 - [ ] Create the rootless user to run the podman containers
 - [ ] Write the compose.yaml spec for your service
@@ -78,8 +78,8 @@ Run the following to convert a compose.yaml into the various `.container` files
 podman run \
   --security-opt label=disable \
   --rm \
-  -v $(pwd)/active/podman_keycloak/:/compose \
-  -v $(pwd)/active/podman_keycloak/quadlets:/quadlets \
+  -v $(pwd)/active/container_keycloak/:/compose \
+  -v $(pwd)/active/container_keycloak/quadlets:/quadlets \
   quay.io/k9withabone/podlet \
   -f /quadlets \
   -i \
@@ -88,7 +88,7 @@ compose /compose/compose.yaml
 # Copy the files to the server
 export PODMAN_SERVER=
-scp -r active/podman_keycloak/quadlets/. $PODMAN_SERVER:/home/keycloak/.config/containers/systemd/
+scp -r active/container_keycloak/quadlets/. $PODMAN_SERVER:/home/keycloak/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R keycloak:keycloak /home/keycloak/.config/containers/systemd/
 ```
@@ -120,8 +120,8 @@ systemctl --user enable --now podman-auto-update.timer
 ### Expose keycloak
-1. If you need a domain, follow the [DDNS instructions](/active/podman_ddns/ddns.md#install-a-new-ddns-service)
-2. For a web service, follow the [Caddy instructions](/active/podman_caddy/caddy.md#adding-a-new-caddy-record)
+1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
+2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
 3. Finally, follow your OS's guide for opening ports via its firewall service.
 #### firewalld


@@ -0,0 +1,37 @@
services:
litellm:
image: docker.litellm.ai/berriai/litellm:main-latest
ports:
- 4000:4000
env_file: /home/ai/litellm.env
environment:
DATABASE_URL: "postgresql://llmproxy:dbpassword9090@host.containers.internal:5432/litellm"
STORE_MODEL_IN_DB: "True"
restart: unless-stopped
depends_on:
- litellm-db # Indicates that this service depends on the 'litellm-db' service, ensuring 'litellm-db' starts first
healthcheck: # Defines the health check configuration for the container
test:
- CMD-SHELL
- python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:4000/health/liveliness')" # Command to execute for health check
interval: 30s # Perform health check every 30 seconds
timeout: 10s # Health check command times out after 10 seconds
retries: 3 # Retry up to 3 times if health check fails
start_period: 40s # Wait 40 seconds after container start before beginning health checks
litellm-db:
image: docker.io/postgres:16
restart: always
environment:
POSTGRES_DB: litellm
POSTGRES_USER: llmproxy
POSTGRES_PASSWORD: dbpassword9090
ports:
- "5432:5432"
volumes:
- litellm_postgres_data:/var/lib/postgresql/data:z
healthcheck:
test: ["CMD-SHELL", "pg_isready -d litellm -U llmproxy"]
interval: 1s
timeout: 5s
retries: 10


@@ -0,0 +1,67 @@
# General settings
general_settings:
request_timeout: 600
# Models
model_list:
# Qwen3.5-35B variants
- model_name: qwen3.5-35b-think-general
litellm_params:
model: openai/qwen3.5-35b-a3b
api_base: https://llama-cpp.reeselink.com
api_key: none
temperature: 1.0
top_p: 0.95
presence_penalty: 1.5
extra_body:
top_k: 20
min_p: 0.0
repetition_penalty: 1.0
chat_template_kwargs:
enable_thinking: true
- model_name: qwen3.5-35b-think-code
litellm_params:
model: openai/qwen3.5-35b-a3b
api_base: https://llama-cpp.reeselink.com
api_key: none
temperature: 0.6
top_p: 0.95
presence_penalty: 0.0
extra_body:
top_k: 20
min_p: 0.0
repetition_penalty: 1.0
chat_template_kwargs:
enable_thinking: true
- model_name: qwen3.5-35b-instruct-general
litellm_params:
model: openai/qwen3.5-35b-a3b
api_base: https://llama-cpp.reeselink.com
api_key: none
temperature: 0.7
top_p: 0.8
presence_penalty: 1.5
extra_body:
top_k: 20
min_p: 0.0
repetition_penalty: 1.0
chat_template_kwargs:
enable_thinking: false
- model_name: qwen3.5-35b-instruct-reasoning
litellm_params:
model: openai/qwen3.5-35b-a3b
api_base: https://llama-cpp.reeselink.com
api_key: none
temperature: 1.0
top_p: 0.95
presence_penalty: 1.5
extra_body:
top_k: 20
min_p: 0.0
repetition_penalty: 1.0
chat_template_kwargs:
enable_thinking: false


@@ -0,0 +1,233 @@
# Podman litellm
- [Podman litellm](#podman-litellm)
- [Setup litellm Project](#setup-litellm-project)
- [Install litellm](#install-litellm)
- [Create the ai user](#create-the-ai-user)
- [Write the litellm compose spec](#write-the-litellm-compose-spec)
- [A Note on Volumes](#a-note-on-volumes)
- [Convert litellm compose spec to quadlets](#convert-litellm-compose-spec-to-quadlets)
- [Create the litellm.env file](#create-the-litellmenv-file)
- [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
- [Expose litellm](#expose-litellm)
- [Using LiteLLM](#using-litellm)
- [Adding Models](#adding-models)
- [Testing Models](#testing-models)
- [Backup litellm](#backup-litellm)
- [Upgrade litellm](#upgrade-litellm)
- [Upgrade Quadlets](#upgrade-quadlets)
- [Uninstall](#uninstall)
- [Notes](#notes)
- [SELinux](#selinux)
## Setup litellm Project
- [ ] Copy and rename this folder to active/container_litellm
- [ ] Find and replace litellm with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer
## Install litellm
### Create the ai user
```bash
# SSH into your podman server as root
useradd ai
loginctl enable-linger $(id -u ai)
systemctl --user --machine=ai@.host enable podman-restart
systemctl --user --machine=ai@.host enable --now podman.socket
mkdir -p /home/ai/.config/containers/systemd
```
### Write the litellm compose spec
See the [docker run command here](https://docs.litellm.ai/docs/proxy/docker_quick_start#32-start-proxy)
Edit the compose.yaml at active/container_litellm/compose/compose.yaml
#### A Note on Volumes
Named volumes are stored at `/home/ai/.local/share/containers/storage/volumes/`.
### Convert litellm compose spec to quadlets
Run the following to convert a compose.yaml into the various `.container` files for systemd:
```bash
# Generate the systemd service
podman run \
--security-opt label=disable \
--rm \
-v $(pwd)/active/container_litellm/compose:/compose \
-v $(pwd)/active/container_litellm/quadlets:/quadlets \
quay.io/k9withabone/podlet \
-f /quadlets \
-i \
--overwrite \
compose /compose/compose.yaml
# Copy the files to the server
export PODMAN_SERVER=ai
scp -r active/container_litellm/quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
```
### Create the litellm.env file
Should look something like:
```env
LITELLM_MASTER_KEY="random-string"
LITELLM_SALT_KEY="random-string"
UI_USERNAME="admin"
UI_PASSWORD="random-string"
```
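The `random-string` values are placeholders; one way to generate them (assuming `openssl` is available) is:

```bash
# 32 bytes of randomness, hex-encoded (64 characters)
openssl rand -hex 32
```

Note that `LITELLM_SALT_KEY` is used to encrypt model credentials stored in the database, so avoid changing it after models have been added.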
Then copy it to the server, along with the LiteLLM config:
```bash
export PODMAN_SERVER=ai
scp active/container_litellm/litellm.env $PODMAN_SERVER:/home/ai/litellm.env
scp active/container_litellm/config.yaml $PODMAN_SERVER:/home/ai/litellm_config.yaml
ssh $PODMAN_SERVER chown ai:ai /home/ai/litellm.env /home/ai/litellm_config.yaml
```
### Start and enable your systemd quadlet
SSH into your podman server as root:
```bash
ssh ai
machinectl shell ai@
systemctl --user daemon-reload
systemctl --user restart litellm
journalctl --user -u litellm -f
# Enable auto-update service which will pull new container images automatically every day
systemctl --user enable --now podman-auto-update.timer
```
### Expose litellm
1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.
## Using LiteLLM
### Adding Models
```json
// qwen3.5-35b-a3b-thinking
{
"temperature": 1,
"top_p": 0.95,
"presence_penalty": 1.5,
"extra_body": {
"top_k": 20,
"min_p": 0,
"repetition_penalty": 1,
"chat_template_kwargs": {
"enable_thinking": true
}
}
}
// qwen3.5-35b-a3b-coding
{
"temperature": 0.6,
"top_p": 0.95,
"presence_penalty": 0,
"extra_body": {
"top_k": 20,
"min_p": 0,
"repetition_penalty": 1,
"chat_template_kwargs": {
"enable_thinking": true
}
}
}
// qwen3.5-35b-a3b-instruct
{
"temperature": 0.7,
"top_p": 0.8,
"presence_penalty": 1.5,
"extra_body": {
"top_k": 20,
"min_p": 0,
"repetition_penalty": 1,
"chat_template_kwargs": {
"enable_thinking": false
}
}
}
```
### Testing Models
```bash
# List models
curl -L -X GET 'https://aipi.reeseapps.com/v1/models' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234'
# Replace "model" with the 'public model name' of any db-model
curl -L -X POST 'https://aipi.reeseapps.com/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
"model": "gpt-4o-mini",
"messages": [
{
"content": "Hey, how is it going?",
"role": "user"
}
]
}'
```
## Backup litellm
Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
## Upgrade litellm
### Upgrade Quadlets
Upgrades should be a repeat of [writing the compose spec](#convert-litellm-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)
```bash
export PODMAN_SERVER=ai
scp -r active/container_litellm/quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
ssh $PODMAN_SERVER
machinectl shell ai@
systemctl --user daemon-reload
systemctl --user restart litellm
```
## Uninstall
```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer
# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
# userdel: user ai is currently used by process 591255
# kill those processes and try again
userdel ai
```
## Notes
### SELinux
<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>
`:z` allows a container to share a mounted volume with all other containers.
`:Z` reserves a mounted volume for a single container and prevents any other container from accessing it.
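In quadlet terms, the label is the suffix on the `Volume=` line. A sketch (the second volume name is hypothetical):
```conf
# Shared label: other containers may mount this volume too
Volume=litellm_postgres_data:/var/lib/postgresql/data:z
# Private label: relabeled exclusively for this container
Volume=scratch_data:/scratch:Z
```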


@@ -0,0 +1,15 @@
[Container]
Environment=POSTGRES_DB=litellm POSTGRES_USER=llmproxy POSTGRES_PASSWORD=dbpassword9090
HealthCmd='pg_isready -d litellm -U llmproxy'
HealthInterval=1s
HealthRetries=10
HealthTimeout=5s
Image=docker.io/postgres:16
PublishPort=5432:5432
Volume=litellm_postgres_data:/var/lib/postgresql/data:z
[Service]
Restart=always
[Install]
WantedBy=default.target


@@ -0,0 +1,19 @@
[Unit]
Requires=litellm-db.service
[Container]
Environment=DATABASE_URL=postgresql://llmproxy:dbpassword9090@host.containers.internal:5432/litellm STORE_MODEL_IN_DB=True
EnvironmentFile=/home/ai/litellm.env
HealthCmd="python3 -c \"import urllib.request; urllib.request.urlopen('http://localhost:4000/health/liveliness')\""
HealthInterval=30s
HealthRetries=3
HealthStartPeriod=40s
HealthTimeout=10s
Image=docker.litellm.ai/berriai/litellm:main-latest
PublishPort=4000:4000
[Service]
Restart=always
[Install]
WantedBy=default.target


@@ -0,0 +1,3 @@
# Compose
Put your compose.yaml here.


@@ -18,7 +18,7 @@
 ## Setup matrix Project
-- [x] Copy and rename this folder to active/podman_matrix
+- [x] Copy and rename this folder to active/container_matrix
 - [x] Find and replace matrix with the name of the service.
 - [x] Create the rootless user to run the podman containers
 - [x] Write the compose.yaml spec for your service
@@ -57,8 +57,8 @@ On your local machine:
 podman run \
   --security-opt label=disable \
   --rm \
-  -v $(pwd)/active/podman_matrix/compose:/compose \
-  -v $(pwd)/active/podman_matrix/quadlets:/quadlets \
+  -v $(pwd)/active/container_matrix/compose:/compose \
+  -v $(pwd)/active/container_matrix/quadlets:/quadlets \
   quay.io/k9withabone/podlet \
   -f /quadlets \
   -i \
@@ -66,10 +66,10 @@ quay.io/k9withabone/podlet \
 compose /compose/compose.yaml
 # Copy the files to the server
-scp -r active/podman_matrix/quadlets/. matrix:~/.config/containers/systemd/
+scp -r active/container_matrix/quadlets/. matrix:~/.config/containers/systemd/
 # Copy the compose files to the server
-scp -r active/podman_matrix/compose/. matrix:~/.config//
+scp -r active/container_matrix/compose/. matrix:~/.config//
 ```
 ```bash
@@ -96,8 +96,8 @@ ghcr.io/matrix-construct/tuwunel:latest \
 ### Expose matrix
-1. If you need a domain, follow the [DDNS instructions](/active/podman_ddns/ddns.md#install-a-new-ddns-service)
-2. For a web service, follow the [Caddy instructions](/active/podman_caddy/caddy.md#adding-a-new-caddy-record)
+1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
+2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
 3. Finally, follow your OS's guide for opening ports via its firewall service.
 #### firewalld


@@ -34,8 +34,8 @@ podman run \
   --security-opt label=disable \
   --userns keep-id \
   --rm \
-  -v $(pwd)/active/podman_minecraft:/compose \
-  -v $(pwd)/active/podman_minecraft/quadlets:/quadlets \
+  -v $(pwd)/active/container_minecraft:/compose \
+  -v $(pwd)/active/container_minecraft/quadlets:/quadlets \
   quay.io/k9withabone/podlet \
   -f /quadlets \
   -i \
@@ -43,7 +43,7 @@ quay.io/k9withabone/podlet \
 compose /compose/compose.yaml
 # Copy the files to the server
-scp -r active/podman_minecraft/quadlets/. minecraft:~/.config/containers/systemd/
+scp -r active/container_minecraft/quadlets/. minecraft:~/.config/containers/systemd/
 ```
 ### Install Quadlets
@@ -78,10 +78,10 @@ ssh minecraft systemctl --user restart minecraft
 ## Expose minecraft
-1. Create your minecraft ddns record first [following these docs](/active/podman_ddns/ddns.md#)
+1. Create your minecraft ddns record first [following these docs](/active/container_ddns/ddns.md#)
 2. Create a SRV record in your DNS provider like the following:
-active/podman_minecraft/secrets/reeseapps_records.json:
+active/container_minecraft/secrets/reeseapps_records.json:
 ```json
 {
@@ -105,7 +105,7 @@ ssh minecraft systemctl --user restart minecraft
 ```
 ```bash
-aws route53 change-resource-record-sets --hosted-zone-id $(cat active/aws_route53/secrets/reeseapps-zoneid) --change-batch file://active/podman_minecraft/secrets/reeseapps_records.json
+aws route53 change-resource-record-sets --hosted-zone-id $(cat active/aws_route53/secrets/reeseapps-zoneid) --change-batch file://active/container_minecraft/secrets/reeseapps_records.json
 ```
 3. Test your record with `nslookup`


@@ -36,7 +36,7 @@
 1. Create Fedora VM
 2. [Install Docker](https://docs.docker.com/engine/install/fedora/)
 3. Create and mount a directory at `/srv/nextcloud-data`
-4. `scp active/podman_nextcloud/nextcloud-compose.yaml nextcloud:`
+4. `scp active/container_nextcloud/nextcloud-compose.yaml nextcloud:`
 5. `docker compose -f nextcloud-compose.yaml up -d`
 ## Install with Rootless Podman
@@ -87,7 +87,7 @@ On the operator
 ```bash
 # Copy the quadlet files
 scp \
-  active/podman_nextcloud/nextcloud-aio-mastercontainer.container \
+  active/container_nextcloud/nextcloud-aio-mastercontainer.container \
   3dserver:/home/nextcloud/.config/containers/systemd/
 ssh 3dserver chown -R nextcloud:nextcloud /home/nextcloud/.config/containers/systemd/


@@ -0,0 +1,81 @@
# Nginx
## Initial Install
Create your initial `secrets/nginx.conf` to look something like:
```conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
stream {
log_format stream_logs '$remote_addr [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
access_log /dev/stdout stream_logs;
error_log stderr info;
server {
listen 25565;
listen [::]:25565;
proxy_pass my-minecraft-server.internal.dns:25565;
}
}
```
Create the systemd service:
```bash
# Create the quadlet unit file
vim /etc/containers/systemd/nginx.container
```
```conf
[Unit]
Description=Nginx
[Container]
AddCapability=NET_ADMIN
ContainerName=nginx
Image=docker.io/nginx
Network=host
SecurityLabelDisable=true
Volume=/etc/nginx:/etc/nginx
[Service]
Restart=always
[Install]
WantedBy=default.target
```
Reload the service and start it:
```bash
systemctl daemon-reload
systemctl start nginx
```
## Update the Configuration
```bash
scp active/container_nginx/secrets/nginx.conf proxy:/etc/nginx/nginx.conf
ssh proxy
systemctl restart nginx
```
## Logs
```bash
# Watch client connections
journalctl -u nginx -f | grep -e 'client .* connected'
# Watch upstream proxy connections
journalctl -u nginx -f | grep -e 'proxy .* connected'
```


@@ -0,0 +1,3 @@
# Compose
Put your compose.yaml here.


@@ -0,0 +1,11 @@
services:
rabbitmq:
container_name: rabbitmq
restart: always
image: docker.io/rabbitmq:3-management
ports:
- 15672:15672
- 5672:5672
env_file: /home/rabbitmq/rabbitmq.env
volumes:
- /home/rabbitmq/data:/var/lib/rabbitmq

Some files were not shown because too many files have changed in this diff.