Compare commits

25 commits, `f3c313e610...main`:

acf6421b53, 875795a409, b9d1c2a9a3, 6f8b7ffca6, cc75227a77, 9ae82fc3de, 92edf49948, 25d3a7805c, eb67191706, d51560f979, 88ecb458e1, 31739320aa, f70028cf63, ecf4ae1058, eff2aa4066, a53e67653d, d48b9a66cb, 2c5af8507c, ba66c47719, da0b06768e, 1c6e1b7032, 087d8888cf, cb486ae289, cd56318ab0, 416321206d
|
208
active/container_bifrost/bifrost.md
Normal file
208
active/container_bifrost/bifrost.md
Normal file
@@ -0,0 +1,208 @@
|
|||||||
|
# Podman bifrost
|
||||||
|
|
||||||
|
- [Podman bifrost](#podman-bifrost)
|
||||||
|
- [Setup bifrost Project](#setup-bifrost-project)
|
||||||
|
- [Install bifrost](#install-bifrost)
|
||||||
|
- [Create the ai user](#create-the-ai-user)
|
||||||
|
- [Write the bifrost compose spec](#write-the-bifrost-compose-spec)
|
||||||
|
- [A Note on Volumes](#a-note-on-volumes)
|
||||||
|
- [Convert bifrost compose spec to quadlets](#convert-bifrost-compose-spec-to-quadlets)
|
||||||
|
- [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
|
||||||
|
- [Expose bifrost](#expose-bifrost)
|
||||||
|
- [Using bifrost](#using-bifrost)
|
||||||
|
- [Adding Models](#adding-models)
|
||||||
|
- [Testing Models](#testing-models)
|
||||||
|
- [Backup bifrost](#backup-bifrost)
|
||||||
|
- [Upgrade bifrost](#upgrade-bifrost)
|
||||||
|
- [Upgrade Quadlets](#upgrade-quadlets)
|
||||||
|
- [Uninstall](#uninstall)
|
||||||
|
- [Notes](#notes)
|
||||||
|
- [SELinux](#selinux)
|
||||||
|
|
||||||
|
## Setup bifrost Project

- [ ] Copy and rename this folder to active/container_bifrost
- [ ] Find and replace bifrost with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer
## Install bifrost

### Create the ai user

```bash
# SSH into your podman server as root
useradd ai
loginctl enable-linger $(id -u ai)
systemctl --user --machine=ai@.host enable podman-restart
systemctl --user --machine=ai@.host enable --now podman.socket
mkdir -p /home/ai/.config/containers/systemd
```
### Write the bifrost compose spec

Edit the compose.yaml at active/container_bifrost/compose/compose.yaml

#### A Note on Volumes

Named volumes are stored at `/home/ai/.local/share/containers/storage/volumes/` (the rootless storage of the user running the containers).
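For an ad-hoc copy of a named volume, `podman volume export` writes its contents out as a tarball. The snippet below only builds and prints the command (a dry run), since it assumes a running rootless podman setup and the `bifrost-data` volume name from the compose spec:

```bash
# Dry-run sketch: print the backup command instead of executing it.
volume=bifrost-data                      # named volume from compose.yaml
backup="/tmp/${volume}-$(date +%F).tar"  # dated tarball path
cmd="podman volume export ${volume} --output ${backup}"
echo "$cmd"
```

Run the printed command as the `ai` user on the podman server.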
### Convert bifrost compose spec to quadlets

Run the following to convert a compose.yaml into the various `.container` files for systemd:

```bash
# Generate the systemd service
podman run \
  --security-opt label=disable \
  --rm \
  -v $(pwd)/active/container_bifrost/compose:/compose \
  -v $(pwd)/active/container_bifrost/quadlets:/quadlets \
  quay.io/k9withabone/podlet \
  -f /quadlets \
  -i \
  --overwrite \
  compose /compose/compose.yaml

# Copy the files to the server
export PODMAN_SERVER=ai-ai
scp -r active/container_bifrost/quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
```
### Start and enable your systemd quadlet

SSH into your podman server as the ai user (the quadlet runs under that user's systemd instance):

```bash
systemctl --user daemon-reload
systemctl --user restart bifrost
journalctl --user -u bifrost -f

# Enable the auto-update timer, which pulls new container images daily
systemctl --user enable --now podman-auto-update.timer
```
### Expose bifrost

1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.
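As a sketch of step 2, a minimal Caddy record for this service could look like the following. The domain and upstream hostname are placeholders, not taken from this repo; the port matches the compose spec's `8000`:

```Caddyfile
bifrost.example.com:443 {
    reverse_proxy podman-server.internal:8000
}
```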
## Using bifrost

### Adding Models

```json
// qwen3.5-35b-a3b-thinking
{
  "temperature": 1,
  "top_p": 0.95,
  "presence_penalty": 1.5,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": true
    }
  }
}

// qwen3.5-35b-a3b-coding
{
  "temperature": 0.6,
  "top_p": 0.95,
  "presence_penalty": 0,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": true
    }
  }
}

// qwen3.5-35b-a3b-instruct
{
  "temperature": 0.7,
  "top_p": 0.8,
  "presence_penalty": 1.5,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": false
    }
  }
}
```
### Testing Models

```bash
# List models
curl -L -X GET 'https://aipi.reeseapps.com/v1/models' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234'

# Chat completion -- replace gpt-4o-mini with the 'public model name' of any db-model
curl -L -X POST 'https://aipi.reeseapps.com/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "content": "Hey, how'\''s it going",
        "role": "user"
      }
    ]
  }'
```
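To turn the `/v1/models` response into a plain list of model names without jq, a crude grep/cut pipeline works. The sample response below is hypothetical (it reuses two model names from the section above); in practice, pipe the real curl output instead:

```bash
# Hypothetical cached /v1/models response (OpenAI-compatible list format).
response='{"data":[{"id":"qwen3.5-35b-a3b-thinking"},{"id":"qwen3.5-35b-a3b-coding"}]}'
# Pull out each "id" field; prefer jq if it is installed.
echo "$response" | grep -o '"id":"[^"]*"' | cut -d'"' -f4
```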
## Backup bifrost

Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
## Upgrade bifrost

### Upgrade Quadlets

Upgrades are a repeat of [converting the compose spec](#convert-bifrost-compose-spec-to-quadlets) and [starting the quadlets](#start-and-enable-your-systemd-quadlet):

```bash
export PODMAN_SERVER=
scp -r quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
ssh $PODMAN_SERVER systemctl --user daemon-reload
ssh $PODMAN_SERVER systemctl --user restart bifrost
```
## Uninstall

```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer

# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
#   userdel: user bifrost is currently used by process 591255
# Kill those processes and try again.
userdel bifrost
```
## Notes

### SELinux

<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>

`:z` allows a container to share a mounted volume with all other containers.

`:Z` reserves a mounted volume for a single container and prevents any other container from accessing it.
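In a quadlet, those labels go on the end of a `Volume=` mount. A sketch (the `/srv/bifrost-config` host path is hypothetical):

```ini
[Container]
# :z -- shared label, other containers may also mount this path
Volume=/srv/bifrost-config:/app/config:z
# :Z -- private label, reserved for this container alone
Volume=bifrost-data:/app/data:Z
```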
---

**active/container_bifrost/compose/compose.yaml** (new file, 32 lines)

```yaml
services:
  bifrost:
    image: docker.io/maximhq/bifrost:latest
    container_name: bifrost
    ports:
      - "8000:8000"
    volumes:
      - bifrost-data:/app/data
    environment:
      - APP_PORT=8000
      - APP_HOST=0.0.0.0
      - LOG_LEVEL=info
      - LOG_STYLE=json
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    healthcheck:
      # Probe the port the app actually listens on (APP_PORT)
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "-O",
          "/dev/null",
          "http://localhost:8000/health",
        ]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped
```
---

**active/container_bifrost/quadlets/bifrost.container** (new file, 17 lines)

```ini
[Container]
ContainerName=bifrost
Environment=APP_PORT=8000 APP_HOST=0.0.0.0 LOG_LEVEL=info LOG_STYLE=json
# Probe the port the app actually listens on (APP_PORT)
HealthCmd=["wget", "--no-verbose", "--tries=1", "-O", "/dev/null", "http://localhost:8000/health"]
HealthInterval=30s
HealthRetries=3
HealthTimeout=10s
Image=docker.io/maximhq/bifrost:latest
PublishPort=8000:8000
Ulimit=nofile=65536:65536
Volume=bifrost-data:/app/data

[Service]
Restart=always

[Install]
WantedBy=default.target
```
---

**Containerfile for the custom Caddy image** (modified)

```diff
@@ -1,7 +1,9 @@
 FROM docker.io/caddy:2-builder AS builder

 RUN xcaddy build \
-    --with github.com/caddy-dns/route53@v1.6.0
+    --with github.com/caddy-dns/route53@v1.6.0 \
+    --with github.com/fabriziosalmi/caddy-waf


 FROM docker.io/caddy:2
```
---

**active/container_caddy/caddy.md** (modified)

````diff
@@ -6,6 +6,8 @@
   - [Ansible](#ansible)
   - [Manual](#manual)
 - [Adding a new Caddy Record](#adding-a-new-caddy-record)
+- [Logs](#logs)
+- [Caddy WAF](#caddy-waf)

 ## Custom Caddy Image

@@ -68,6 +70,11 @@ active/container_caddy/install_caddy_proxy.yaml
 ansible-playbook \
   -i ansible/inventory.yaml \
   active/container_caddy/install_caddy_deskwork.yaml
+
+# Toybox (AI) Proxy
+ansible-playbook \
+  -i ansible/inventory.yaml \
+  active/container_caddy/install_caddy_toybox.yaml
 ```

 See ansible playbook [install_caddy.yaml](/active/container_caddy/install_caddy.yaml)
@@ -138,3 +145,66 @@ ddns service:
 1. Update the [ddns caddy records](/active/container_ddns/secrets/caddy_records.yaml)
 2. (Optional) Update the Caddyfile at `active/container_caddy/secrets/Caddyfile`
 3. Run the [caddy ansible playbook](/active/container_caddy/caddy.md#install-caddy)
+
+## Logs
+
+```bash
+# Follow remote connections
+podman logs -f caddy | grep -e '^{' | jq -c '.request | {remote_ip,host}'
+
+# Filter out noisy hosts
+podman logs -f caddy | grep -e '^{' | jq -c '.request | {remote_ip,host} | select(.host != "gitea.reeseapps.com")'
+
+# Focus on user agents
+podman logs -f caddy | grep -e '^{' | jq -c '
+{
+  "User-Agent": .request.headers["User-Agent"],
+  remote_ip: .request.remote_ip,
+  host: .request.host,
+  status: .status
+}
+'
+```
+
+## Caddy WAF
+
+<https://github.com/fabriziosalmi/caddy-waf>
+
+1. Copy the rules.json to `/etc/caddy/rules.json`
+2. Update the Caddyfile to something like this:
+
+```Caddyfile
+gitea.reeseapps.com:443 {
+  log {
+    output stdout
+    format json {
+      message_key msg       # Key for the log message
+      level_key severity    # Key for the log level
+      time_key timestamp    # Key for the timestamp
+      name_key logger       # Key for the logger name
+      caller_key function   # Key for the caller information
+      stacktrace_key stack  # Key for error stacktraces
+      time_format "2006-01-02 15:04:05 MST"  # RFC3339-like format
+      time_local            # Use local timezone
+      duration_format "ms"  # Show durations in milliseconds
+      level_format "upper"  # Uppercase log levels
+    }
+  }
+  route {
+    waf {
+      metrics_endpoint /waf_metrics
+      rule_file rules.json
+    }
+
+    @wafmetrics {
+      path /waf_metrics
+    }
+
+    handle @wafmetrics { } # empty → let the WAF serve the metrics
+
+    handle {
+      reverse_proxy gitea.reeselink.com:3000
+    }
+  }
+}
+```
````
---

**active/container_caddy/install_caddy_proxy.yaml** (modified)

```diff
@@ -1,38 +1,45 @@
 - name: Create Caddy Proxy
   hosts: caddy
   tasks:
     - name: Copy Containerfile for build
       template:
         src: Containerfile
         dest: /etc/caddy/Containerfile
         owner: root
         group: root
-        mode: '0644'
+        mode: "0644"
     - name: Build Caddy Image
       shell:
         cmd: podman build -t gitea.reeseapps.com/services/caddy:latest -f /etc/caddy/Containerfile
     - name: Create /etc/caddy dir
       ansible.builtin.file:
         path: /etc/caddy
         state: directory
-        mode: '0755'
+        mode: "0755"
     - name: Copy Caddyfile
       template:
         src: secrets/proxy.Caddyfile
         dest: /etc/caddy/Caddyfile
         owner: root
         group: root
-        mode: '0644'
+        mode: "0644"
+    - name: Copy rules.json
+      template:
+        src: rules.json
+        dest: /etc/caddy/rules.json
+        owner: root
+        group: root
+        mode: "0644"
     - name: Template Caddy Container Services
       template:
         src: caddy.container
         dest: /etc/containers/systemd/caddy.container
         owner: root
         group: root
-        mode: '0644'
+        mode: "0644"
     - name: Reload and start the Caddy service
       ansible.builtin.systemd_service:
         state: restarted
         name: caddy.service
         enabled: true
         daemon_reload: true
```
---

**active/container_caddy/install_caddy_toybox.yaml** (new file, 28 lines)

```yaml
- name: Create Caddy Proxy
  hosts: toybox-root
  tasks:
    - name: Create /etc/caddy dir
      ansible.builtin.file:
        path: /etc/caddy
        state: directory
        mode: "0755"
    - name: Copy Caddyfile
      template:
        src: secrets/toybox.Caddyfile
        dest: /etc/caddy/Caddyfile
        owner: root
        group: root
        mode: "0644"
    - name: Template Caddy Container Services
      template:
        src: caddy.container
        dest: /etc/containers/systemd/caddy.container
        owner: root
        group: root
        mode: "0644"
    - name: Reload and start the Caddy service
      ansible.builtin.systemd_service:
        state: restarted
        name: caddy.service
        enabled: true
        daemon_reload: true
```
---

**active/container_caddy/rules.json** (new file, 26 lines)

```json
[
  {
    "id": "block-scanners",
    "phase": 1,
    "pattern": "(?i)(nikto|sqlmap|nmap|acunetix|nessus|openvas|wpscan|dirbuster|burpsuite|owasp zap|netsparker|appscan|arachni|skipfish|gobuster|wfuzz|hydra|metasploit|nessus|openvas|qualys|zap|w3af|openwebspider|netsparker|appspider|rapid7|nessus|qualys|nuclei|zgrab|vega|gospider|gxspider|whatweb|xspider|joomscan|uniscan|blindelephant)",
    "targets": ["HEADERS:User-Agent"],
    "severity": "CRITICAL",
    "action": "block",
    "score": 10,
    "description": "Block traffic from known vulnerability scanners and penetration testing tools. Includes more scanners."
  },
  {
    "id": "block-crawlers",
    "phase": 1,
    "pattern": "(meta-externalagent)",
    "targets": ["HEADERS:User-Agent"],
    "severity": "CRITICAL",
    "action": "block",
    "score": 10,
    "description": "Block traffic from web scrapers and crawlers."
  }
]
```
---

**active/container_elk/.env** (new file, 41 lines)

```bash
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme

# Version of Elastic products
STACK_VERSION=8.7.1

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200

# Port to expose Kibana to the host
KIBANA_PORT=5601

# Increase or decrease based on the available host memory (in bytes)
ES_MEM_LIMIT=1073741824
KB_MEM_LIMIT=1073741824
LS_MEM_LIMIT=1073741824

# SAMPLE Predefined Key only to be used in POC environments
ENCRYPTION_KEY=c34d38b3a14956121ff2170e5030b471551370178f43e5626eec58b04a30fae2
```
---

**active/container_elk/elk-compose.yaml** (new file, 219 lines)

```yaml
version: "3.8"

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
  metricbeatdata01:
    driver: local
  filebeatdata01:
    driver: local
  logstashdata01:
    driver: local

networks:
  default:
    name: elastic
    external: false

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  metricbeat01:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
    user: root
    volumes:
      - certs:/usr/share/metricbeat/certs
      - metricbeatdata01:/usr/share/metricbeat/data
      - "./metricbeat.yaml:/usr/share/metricbeat/metricbeat.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro"
      - "/proc:/hostfs/proc:ro"
      - "/:/hostfs:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
      - KIBANA_HOSTS=http://kibana:5601
      - LOGSTASH_HOSTS=http://logstash01:9600

  filebeat01:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/beats/filebeat:${STACK_VERSION}
    user: root
    volumes:
      - certs:/usr/share/filebeat/certs
      - filebeatdata01:/usr/share/filebeat/data
      - "./filebeat_ingest_data/:/usr/share/filebeat/ingest_data/"
      - "./filebeat.yaml:/usr/share/filebeat/filebeat.yml:ro"
      - "/var/lib/docker/containers:/var/lib/docker/containers:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
      - KIBANA_HOSTS=http://kibana:5601
      - LOGSTASH_HOSTS=http://logstash01:9600

  logstash01:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - certs:/usr/share/logstash/certs
      - logstashdata01:/usr/share/logstash/data
      - "./logstash_ingest_data/:/usr/share/logstash/ingest_data/"
      - "./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
```
---

**active/container_elk/elk.md** (new file, 14 lines)

# Elk Stack

## Install

<https://www.elastic.co/blog/getting-started-with-the-elastic-stack-and-docker-compose>

```bash
# Copy over the files
scp -rp active/container_elk/. elk:elk

# SSH into the host
ssh -t elk "cd elk ; bash --login"

# Run the services
docker compose -f elk-compose.yaml up
```
---

**active/container_elk/filebeat.yaml** (new file, 29 lines)

```yaml
filebeat.inputs:
  - type: filestream
    id: default-filestream
    paths:
      - ingest_data/*.log

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_docker_metadata: ~

setup.kibana:
  host: ${KIBANA_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}

output.elasticsearch:
  hosts: ${ELASTIC_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl.enabled: true
  ssl.certificate_authorities: "certs/ca/ca.crt"
```
---

**active/container_elk/logstash.conf** (new file, 24 lines)

```conf
input {
  file {
    # https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
    # Default is mode => "tail", which assumes more data will come into the file.
    # Change to mode => "read" if the file is a complete file. By default, the file
    # will be removed once reading is complete -- back up your files if you need them.
    mode => "tail"
    path => "/usr/share/logstash/ingest_data/*"
  }
}

filter {
}

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    cacert => "certs/ca/ca.crt"
  }
}
```
---

**active/container_elk/metricbeat.yaml** (new file, 62 lines)

```yaml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

metricbeat.modules:
  - module: elasticsearch
    xpack.enabled: true
    period: 10s
    hosts: ${ELASTIC_HOSTS}
    ssl.certificate_authorities: "certs/ca/ca.crt"
    ssl.certificate: "certs/es01/es01.crt"
    ssl.key: "certs/es01/es01.key"
    username: ${ELASTIC_USER}
    password: ${ELASTIC_PASSWORD}
    ssl.enabled: true

  - module: logstash
    xpack.enabled: true
    period: 10s
    hosts: ${LOGSTASH_HOSTS}

  - module: kibana
    metricsets:
      - stats
    period: 10s
    hosts: ${KIBANA_HOSTS}
    username: ${ELASTIC_USER}
    password: ${ELASTIC_PASSWORD}
    xpack.enabled: true

  - module: docker
    metricsets:
      - "container"
      - "cpu"
      - "diskio"
      - "healthcheck"
      - "info"
      #- "image"
      - "memory"
      - "network"
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s
    enabled: true

processors:
  - add_host_metadata: ~
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ${ELASTIC_HOSTS}
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
  ssl:
    certificate: "certs/es01/es01.crt"
    certificate_authorities: "certs/ca/ca.crt"
    key: "certs/es01/es01.key"
```
@@ -24,7 +24,7 @@ services:
       - /etc/localtime:/etc/localtime:ro
     ports:
       - "3000:3000"
-      - "2222:22"
+      - "22:22"
     depends_on:
       - db
@@ -21,8 +21,11 @@
 
 Prereqs
 
-1. Mount data dirs at `/srv/gitea-data` and `/srv/gitea-db`
-2. Create a gitea user and update gitea-compose.yaml with the correct UID
+1. Change the default SSH port for your server to 2022 (or something similar).
+2. Allow SSH to bind to that port: `semanage port -a -t ssh_port_t -p tcp 2022`
+3. Allow 2022 on the firewall: `firewall-cmd --add-port=2022/tcp --permanent && firewall-cmd --reload`
+4. Mount data dirs at `/srv/gitea-data` and `/srv/gitea-db`
+5. Create a gitea user and update gitea-compose.yaml with the correct UID
 
 ```bash
 scp active/container_gitea/gitea-compose.yaml gitea:
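Step 1 of the new prereqs (changing the server's default SSH port) is typically a one-line edit to `sshd_config`. A sketch against a scratch copy, so nothing here touches your live config; the real file is `/etc/ssh/sshd_config`, and stock installs usually ship the port commented out as `#Port 22`:

```shell
# Demo on a scratch copy; point sed at /etc/ssh/sshd_config once verified.
printf '#Port 22\nAddressFamily any\n' > /tmp/sshd_config.demo
# Uncomment (or rewrite) the Port line to 2022
sed -i 's/^#\?Port .*/Port 2022/' /tmp/sshd_config.demo
grep '^Port' /tmp/sshd_config.demo
```

After applying the same edit to the real file and restarting `sshd`, connect with `ssh -p 2022 …`.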
3 active/container_litellm/compose/README.md Normal file
@@ -0,0 +1,3 @@
# Compose

Put your compose.yaml here.
37 active/container_litellm/compose/compose.yaml Normal file
@@ -0,0 +1,37 @@
services:
  litellm:
    image: docker.litellm.ai/berriai/litellm:main-latest
    ports:
      - 4000:4000
    env_file: /home/ai/litellm.env
    environment:
      DATABASE_URL: "postgresql://llmproxy:dbpassword9090@host.containers.internal:5432/litellm"
      STORE_MODEL_IN_DB: "True"
    restart: unless-stopped
    depends_on:
      - litellm-db # Ensures the 'litellm-db' service starts first
    healthcheck: # Health check configuration for the container
      test:
        - CMD-SHELL
        - python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:4000/health/liveliness')" # Command to execute for the health check
      interval: 30s # Perform health check every 30 seconds
      timeout: 10s # Health check command times out after 10 seconds
      retries: 3 # Retry up to 3 times if the health check fails
      start_period: 40s # Wait 40 seconds after container start before beginning health checks

  litellm-db:
    image: docker.io/postgres:16
    restart: always
    environment:
      POSTGRES_DB: litellm
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: dbpassword9090
    ports:
      - "5432:5432"
    volumes:
      - litellm_postgres_data:/var/lib/postgresql/data:z
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d litellm -U llmproxy"]
      interval: 1s
      timeout: 5s
      retries: 10
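The liveliness probe in the healthcheck above is just "can I GET the endpoint". A sketch of the same python one-liner exercised locally against a throwaway HTTP server (inside the container it targets `http://localhost:4000/health/liveliness`; port 8123 here is arbitrary):

```shell
# Stand up a throwaway server, probe it the way the healthcheck would, tear it down.
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8123/')" \
  && echo healthy
kill $SRV
```

A non-2xx response raises inside `urlopen`, which exits python non-zero, which is exactly what marks the container unhealthy.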
67 active/container_litellm/config.yaml Normal file
@@ -0,0 +1,67 @@
# General settings

general_settings:
  request_timeout: 600

# Models
model_list:
  # Qwen3.5-35B variants
  - model_name: qwen3.5-35b-think-general
    litellm_params:
      model: openai/qwen3.5-35b-a3b
      api_base: https://llama-cpp.reeselink.com
      api_key: none
      temperature: 1.0
      top_p: 0.95
      presence_penalty: 1.5
      extra_body:
        top_k: 20
        min_p: 0.0
        repetition_penalty: 1.0
        chat_template_kwargs:
          enable_thinking: true

  - model_name: qwen3.5-35b-think-code
    litellm_params:
      model: openai/qwen3.5-35b-a3b
      api_base: https://llama-cpp.reeselink.com
      api_key: none
      temperature: 0.6
      top_p: 0.95
      presence_penalty: 0.0
      extra_body:
        top_k: 20
        min_p: 0.0
        repetition_penalty: 1.0
        chat_template_kwargs:
          enable_thinking: true

  - model_name: qwen3.5-35b-instruct-general
    litellm_params:
      model: openai/qwen3.5-35b-a3b
      api_base: https://llama-cpp.reeselink.com
      api_key: none
      temperature: 0.7
      top_p: 0.8
      presence_penalty: 1.5
      extra_body:
        top_k: 20
        min_p: 0.0
        repetition_penalty: 1.0
        chat_template_kwargs:
          enable_thinking: false

  - model_name: qwen3.5-35b-instruct-reasoning
    litellm_params:
      model: openai/qwen3.5-35b-a3b
      api_base: https://llama-cpp.reeselink.com
      api_key: none
      temperature: 1.0
      top_p: 0.95
      presence_penalty: 1.5
      extra_body:
        top_k: 20
        min_p: 0.0
        repetition_penalty: 1.0
        chat_template_kwargs:
          enable_thinking: false
233 active/container_litellm/litellm.md Normal file
@@ -0,0 +1,233 @@
# Podman litellm

- [Podman litellm](#podman-litellm)
  - [Setup litellm Project](#setup-litellm-project)
  - [Install litellm](#install-litellm)
    - [Create the ai user](#create-the-ai-user)
    - [Write the litellm compose spec](#write-the-litellm-compose-spec)
      - [A Note on Volumes](#a-note-on-volumes)
    - [Convert litellm compose spec to quadlets](#convert-litellm-compose-spec-to-quadlets)
    - [Create the litellm.env file](#create-the-litellmenv-file)
    - [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
    - [Expose litellm](#expose-litellm)
  - [Using LiteLLM](#using-litellm)
    - [Adding Models](#adding-models)
    - [Testing Models](#testing-models)
  - [Backup litellm](#backup-litellm)
  - [Upgrade litellm](#upgrade-litellm)
    - [Upgrade Quadlets](#upgrade-quadlets)
  - [Uninstall](#uninstall)
  - [Notes](#notes)
    - [SELinux](#selinux)

## Setup litellm Project

- [ ] Copy and rename this folder to active/container_litellm
- [ ] Find and replace litellm with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer

## Install litellm

### Create the ai user

```bash
# SSH into your podman server as root
useradd ai
loginctl enable-linger $(id -u ai)
systemctl --user --machine=ai@.host enable podman-restart
systemctl --user --machine=ai@.host enable --now podman.socket
mkdir -p /home/ai/.config/containers/systemd
```

### Write the litellm compose spec

See the [docker run command here](https://docs.litellm.ai/docs/proxy/docker_quick_start#32-start-proxy)

Edit the compose.yaml at active/container_litellm/compose/compose.yaml

#### A Note on Volumes

Named volumes are stored at `/home/ai/.local/share/containers/storage/volumes/`.

### Convert litellm compose spec to quadlets

Run the following to convert a compose.yaml into the various `.container` files for systemd:

```bash
# Generate the systemd service
podman run \
  --security-opt label=disable \
  --rm \
  -v $(pwd)/active/container_litellm/compose:/compose \
  -v $(pwd)/active/container_litellm/quadlets:/quadlets \
  quay.io/k9withabone/podlet \
  -f /quadlets \
  -i \
  --overwrite \
  compose /compose/compose.yaml

# Copy the files to the server
export PODMAN_SERVER=ai
scp -r active/container_litellm/quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
```

### Create the litellm.env file

Should look something like:

```env
LITELLM_MASTER_KEY="random-string"
LITELLM_SALT_KEY="random-string"

UI_USERNAME="admin"
UI_PASSWORD="random-string"
```

Then copy the files to the server

```bash
export PODMAN_SERVER=ai
scp -r active/container_litellm/config.yaml $PODMAN_SERVER:/home/ai/litellm_config.yaml
ssh $PODMAN_SERVER chown -R ai:ai /home/ai/litellm_config.yaml
```
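The `random-string` values can come from any cryptographically secure generator; one sketch using Python's `secrets` module (the `sk-` prefix on the master key follows LiteLLM's examples):

```shell
# Emit ready-to-paste lines for litellm.env
python3 - <<'EOF'
import secrets
print(f'LITELLM_MASTER_KEY="sk-{secrets.token_hex(16)}"')
print(f'LITELLM_SALT_KEY="{secrets.token_hex(16)}"')
print(f'UI_PASSWORD="{secrets.token_urlsafe(16)}"')
EOF
```

Keep the salt key stable once models are stored in the DB; LiteLLM uses it to encrypt stored credentials.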
### Start and enable your systemd quadlet

SSH into your podman server as root:

```bash
ssh ai
machinectl shell ai@
systemctl --user daemon-reload
systemctl --user restart litellm
journalctl --user -u litellm -f
# Enable the auto-update service, which will pull new container images automatically every day
systemctl --user enable --now podman-auto-update.timer
```

### Expose litellm

1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.

## Using LiteLLM

### Adding Models

```json
// qwen3.5-35b-a3b-thinking
{
  "temperature": 1,
  "top_p": 0.95,
  "presence_penalty": 1.5,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": true
    }
  }
}

// qwen3.5-35b-a3b-coding
{
  "temperature": 0.6,
  "top_p": 0.95,
  "presence_penalty": 0,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": true
    }
  }
}

// qwen3.5-35b-a3b-instruct
{
  "temperature": 0.7,
  "top_p": 0.8,
  "presence_penalty": 1.5,
  "extra_body": {
    "top_k": 20,
    "min_p": 0,
    "repetition_penalty": 1,
    "chat_template_kwargs": {
      "enable_thinking": false
    }
  }
}
```

### Testing Models

```bash
# List models
curl -L -X GET 'https://aipi.reeseapps.com/v1/models' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234'

# Chat completion -- replace "model" with the 'public model name' of any db-model
curl -L -X POST 'https://aipi.reeseapps.com/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "content": "Hey, how is it going",
        "role": "user"
      }
    ]
  }'
```
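For anything beyond a one-off test, keeping the request body in a file avoids shell-quoting headaches (apostrophes inside a single-quoted `-d '...'` break the command) and lets you validate the JSON before sending. A sketch; the model name is the same placeholder as above:

```shell
# Build the request body in a file, then validate it before it ever hits the proxy.
cat > /tmp/chat_request.json <<'EOF'
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "Hey, how's it going"}
  ]
}
EOF
python3 -m json.tool /tmp/chat_request.json
# then send it with: curl ... -d @/tmp/chat_request.json
```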
## Backup litellm

Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)

## Upgrade litellm

### Upgrade Quadlets

Upgrades should be a repeat of [writing the compose spec](#convert-litellm-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)

```bash
export PODMAN_SERVER=
scp -r quadlets/. $PODMAN_SERVER:/home/ai/.config/containers/systemd/
ssh ai systemctl --user daemon-reload
ssh ai systemctl --user restart litellm
```

## Uninstall

```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer

# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
# userdel: user ai is currently used by process 591255
# kill those processes and try again
userdel ai
```

## Notes

### SELinux

<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>

:z allows a container to share a mounted volume with all other containers.

:Z allows a container to reserve a mounted volume and prevents any other container from accessing it.
15 active/container_litellm/quadlets/litellm-db.container Normal file
@@ -0,0 +1,15 @@
[Container]
Environment=POSTGRES_DB=litellm POSTGRES_USER=llmproxy POSTGRES_PASSWORD=dbpassword9090
HealthCmd='pg_isready -d litellm -U llmproxy'
HealthInterval=1s
HealthRetries=10
HealthTimeout=5s
Image=docker.io/postgres:16
PublishPort=5432:5432
Volume=litellm_postgres_data:/var/lib/postgresql/data:z

[Service]
Restart=always

[Install]
WantedBy=default.target
19 active/container_litellm/quadlets/litellm.container Normal file
@@ -0,0 +1,19 @@
[Unit]
Requires=litellm-db.service

[Container]
Environment=DATABASE_URL=postgresql://llmproxy:dbpassword9090@host.containers.internal:5432/litellm STORE_MODEL_IN_DB=True
EnvironmentFile=/home/ai/litellm.env
HealthCmd="python3 -c \"import urllib.request; urllib.request.urlopen('http://localhost:4000/health/liveliness')\""
HealthInterval=30s
HealthRetries=3
HealthStartPeriod=40s
HealthTimeout=10s
Image=docker.litellm.ai/berriai/litellm:main-latest
PublishPort=4000:4000

[Service]
Restart=always

[Install]
WantedBy=default.target
@@ -1,46 +0,0 @@
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    log_format stream_logs '$remote_addr [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';

    access_log /dev/stdout stream_logs;
    error_log stderr info;

    server {
        listen 3478;
        listen [::]:3478;
        proxy_pass nextcloud.reeselink.com:3478;
    }

    server {
        listen 2222;
        listen [::]:2222;
        proxy_pass gitea.reeselink.com:2222;
    }

    server {
        listen 8080;
        listen [::]:8080;
        proxy_pass unifi-external.reeselink.com:2222;
    }

    server {
        listen 25565;
        listen [::]:25565;
        proxy_pass minecraft.reeselink.com:25565;
    }

    server {
        listen 25566;
        listen [::]:25566;
        proxy_pass minecraft.reeselink.com:25566;
    }
}
@@ -2,6 +2,35 @@
 
 ## Initial Install
 
+Create your initial `secrets/nginx.conf` to look something like:
+
+```conf
+user nginx;
+worker_processes auto;
+
+error_log /var/log/nginx/error.log notice;
+pid /var/run/nginx.pid;
+
+events {
+    worker_connections 1024;
+}
+
+stream {
+    log_format stream_logs '$remote_addr [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
+
+    access_log /dev/stdout stream_logs;
+    error_log stderr info;
+
+    server {
+        listen 25565;
+        listen [::]:25565;
+        proxy_pass my-minecraft-server.internal.dns:25565;
+    }
+}
+```
+
+Create the systemd service:
+
 ```bash
 # Get the initial configuration
 vim /etc/containers/systemd/nginx.container
@@ -26,11 +55,27 @@ Restart=always
 WantedBy=default.target
 ```
 
-## Update the Configuration
+Reload the service and start it:
 
 ```bash
-scp active/container_nginx/nginx.conf proxy:/etc/nginx/nginx.conf
-ssh proxy
 systemctl daemon-reload
 systemctl start nginx
 ```
+
+## Update the Configuration
+
+```bash
+scp active/container_nginx/secrets/nginx.conf proxy:/etc/nginx/nginx.conf
+ssh proxy
+systemctl restart nginx
+```
+
+## Logs
+
+```bash
+# Watch client connections
+journalctl -u nginx -f | grep -e 'client .* connected'
+
+# Watch upstream proxy connections
+journalctl -u nginx -f | grep -e 'proxy .* connected'
+```
3 active/container_rabbitmq/compose/README.md Normal file
@@ -0,0 +1,3 @@
# Compose

Put your compose.yaml here.
11 active/container_rabbitmq/compose/compose.yaml Normal file
@@ -0,0 +1,11 @@
services:
  rabbitmq:
    container_name: rabbitmq
    restart: always
    image: docker.io/rabbitmq:3-management
    ports:
      - 15672:15672
      - 5672:5672
    env_file: /home/rabbitmq/rabbitmq.env
    volumes:
      - /home/rabbitmq/data:/var/lib/rabbitmq
2 active/container_rabbitmq/example.env Normal file
@@ -0,0 +1,2 @@
RABBITMQ_DEFAULT_USER=user
RABBITMQ_DEFAULT_PASS=password
196 active/container_rabbitmq/foobar.md Normal file
@@ -0,0 +1,196 @@
# Podman rabbitmq

- [Podman rabbitmq](#podman-rabbitmq)
  - [Setup rabbitmq Project](#setup-rabbitmq-project)
  - [Install rabbitmq](#install-rabbitmq)
    - [Create the rabbitmq user](#create-the-rabbitmq-user)
    - [Generate the rabbitmq tls certs](#generate-the-rabbitmq-tls-certs)
    - [Write the rabbitmq compose spec](#write-the-rabbitmq-compose-spec)
      - [A Note on Volumes](#a-note-on-volumes)
    - [Convert rabbitmq compose spec to quadlets](#convert-rabbitmq-compose-spec-to-quadlets)
    - [Create any container-mounted directories](#create-any-container-mounted-directories)
    - [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
    - [Alias rabbitmqctl](#alias-rabbitmqctl)
    - [Expose rabbitmq](#expose-rabbitmq)
      - [firewalld](#firewalld)
  - [Backup rabbitmq](#backup-rabbitmq)
  - [Upgrade rabbitmq](#upgrade-rabbitmq)
    - [Upgrade Quadlets](#upgrade-quadlets)
  - [Uninstall](#uninstall)
  - [Notes](#notes)
    - [SELinux](#selinux)

## Setup rabbitmq Project

- [x] Copy and rename this folder to active/container_rabbitmq
- [x] Find and replace rabbitmq with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer

## Install rabbitmq

### Create the rabbitmq user

```bash
# SSH into your podman server as root
useradd rabbitmq
loginctl enable-linger $(id -u rabbitmq)
systemctl --user --machine=rabbitmq@.host enable podman-restart
systemctl --user --machine=rabbitmq@.host enable --now podman.socket
mkdir -p /home/rabbitmq/.config/containers/systemd
```

### Generate the rabbitmq tls certs

We'll use TLS authentication to ensure encryption between our servers and clients.

<https://www.rabbitmq.com/docs/ssl#automated-certificate-generation-transcript>

```bash
ssh rabbitmq
git clone https://github.com/rabbitmq/tls-gen tls-gen
cd tls-gen/basic
# private key password
make PASSWORD=bunnies
make verify
make info
ls -l ./result
```
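Beyond `make verify`, `openssl x509` is handy for eyeballing any of the generated certificates under `./result` (exact filenames depend on the tls-gen version). A sketch on a throwaway self-signed cert so the commands run anywhere; point `-in` at the real files instead:

```shell
# Generate a disposable cert just for demonstration...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo_key.pem -out /tmp/demo_cert.pem 2>/dev/null
# ...then inspect its subject and validity window, as you would for ./result/*.pem
openssl x509 -in /tmp/demo_cert.pem -noout -subject -dates
```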
### Write the rabbitmq compose spec

Edit the compose.yaml at active/container_rabbitmq/compose/compose.yaml

#### A Note on Volumes

Named volumes are stored at `/home/rabbitmq/.local/share/containers/storage/volumes/`.

### Convert rabbitmq compose spec to quadlets

Run the following to convert a compose.yaml into the various `.container` files for systemd:

```bash
# Generate the systemd service
podman run \
  --security-opt label=disable \
  --rm \
  -v $(pwd)/active/container_rabbitmq/compose:/compose \
  -v $(pwd)/active/container_rabbitmq/quadlets:/quadlets \
  quay.io/k9withabone/podlet \
  -f /quadlets \
  -i \
  --overwrite \
  compose /compose/compose.yaml

# Copy the files to the server
export PODMAN_SERVER=rabbitmq
scp -r active/container_rabbitmq/quadlets/. $PODMAN_SERVER:/home/rabbitmq/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R rabbitmq:rabbitmq /home/rabbitmq/.config/containers/systemd/
```

### Create any container-mounted directories

SSH into your podman server as root:

```bash
machinectl shell rabbitmq@
podman unshare
# /var/lib/rabbitmq
mkdir data
# Chown to the namespaced user with UID 1000
# This will be some really obscure UID outside the namespace
# This will also solve most permission denied errors
chown -R 1000:1000 data
```

### Start and enable your systemd quadlet

SSH into your podman server as root:

```bash
machinectl shell rabbitmq@
systemctl --user daemon-reload
systemctl --user restart rabbitmq
# Enable the auto-update service, which will pull new container images automatically every day
systemctl --user enable --now podman-auto-update.timer
```

### Alias rabbitmqctl

We'll use containers to run rabbitmqctl, so we'll add an alias to our `.bashrc` to make things easier:

```bash
alias rabbitmqctl='podman exec -it rabbitmq rabbitmqctl'
```

### Expose rabbitmq

1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.

#### firewalld

```bash
# command to get current active zone and default zone
firewall-cmd --get-active-zones
firewall-cmd --get-default-zone

# command to open 443 on tcp
firewall-cmd --permanent --zone=<zone> --add-port=443/tcp

# command to open 80 and 443 on tcp and udp
firewall-cmd --permanent --zone=<zone> --add-port={80,443}/{tcp,udp}

# command to list available services and then open http and https
firewall-cmd --get-services
firewall-cmd --permanent --zone=<zone> --add-service={http,https}
```

## Backup rabbitmq

Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)

## Upgrade rabbitmq

### Upgrade Quadlets

Upgrades should be a repeat of [writing the compose spec](#convert-rabbitmq-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)

```bash
export PODMAN_SERVER=
scp -r quadlets/. $PODMAN_SERVER:/home/rabbitmq/.config/containers/systemd/
ssh rabbitmq systemctl --user daemon-reload
ssh rabbitmq systemctl --user restart rabbitmq
```

## Uninstall

```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer

# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
# userdel: user rabbitmq is currently used by process 591255
# kill those processes and try again
userdel rabbitmq
```

## Notes

### SELinux

<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>

:z allows a container to share a mounted volume with all other containers.

:Z allows a container to reserve a mounted volume and prevents any other container from accessing it.
12 active/container_rabbitmq/quadlets/rabbitmq.container Normal file
@@ -0,0 +1,12 @@
[Container]
ContainerName=rabbitmq
EnvironmentFile=/srv/rabbitmq/rabbitmq.env
Image=docker.io/rabbitmq:3-management
PublishPort=15672:15672
PublishPort=5672:5672

[Service]
Restart=always

[Install]
WantedBy=default.target
108
active/container_rabbitmq/rabbitmq.md
Normal file
108
active/container_rabbitmq/rabbitmq.md
Normal file
@@ -0,0 +1,108 @@
|
|||||||
|
# Podman rabbitmq
|
||||||
|
|
||||||
|
- [Podman rabbitmq](#podman-rabbitmq)
|
||||||
|
- [Setup rabbitmq Project](#setup-rabbitmq-project)
|
||||||
|
- [Install rabbitmq](#install-rabbitmq)
|
||||||
|
- [Expose rabbitmq](#expose-rabbitmq)
|
||||||
|
- [firewalld](#firewalld)
|
||||||
|
- [Backup rabbitmq](#backup-rabbitmq)
|
||||||
|
- [Upgrade rabbitmq](#upgrade-rabbitmq)
|
||||||
|
- [Upgrade Quadlets](#upgrade-quadlets)
|
||||||
|
- [Uninstall](#uninstall)
|
||||||
|
- [Notes](#notes)
|
||||||
|
- [SELinux](#selinux)
|
||||||
|
|
||||||
|
## Setup rabbitmq Project
|
||||||
|
|
||||||
|
- [x] Copy and rename this folder to active/container_foobar
|
||||||
|
- [x] Find and replace rabbitmq with the name of the service.
|
||||||
|
- [ ] Create the rootless user to run the podman containers
|
||||||
|
- [ ] Write the compose.yaml spec for your service
|
||||||
|
- [ ] Convert the compose.yaml spec to a quadlet
|
||||||
|
- [ ] Install the quadlet on the podman server
|
||||||
|
- [ ] Expose the quadlet service
|
||||||
|
- [ ] Install a backup service and timer
|
||||||
|
|
||||||
|
## Install rabbitmq
|
||||||
|
|
||||||
|
<https://hub.docker.com/_/rabbitmq/>
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ssh rabbitmq mkdir /srv/rabbitmq
|
||||||
|
scp active/container_rabbitmq/example.env rabbitmq:/srv/rabbitmq/rabbitmq.env
|
||||||
|
scp active/container_rabbitmq/rabbitmq-compose.yaml rabbitmq:
|
||||||
|
ssh rabbitmq docker compose -f rabbitmq-compose.yaml up
|
||||||
|
```
|
||||||
|
|
||||||
|
List queues
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker exec -it rabbitmq rabbitmqctl list_queues
|
||||||
|
```

### Expose rabbitmq

1. If you need a domain, follow the [DDNS instructions](/active/container_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/container_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.

#### firewalld

```bash
# Get the current active zone and the default zone
firewall-cmd --get-active-zones
firewall-cmd --get-default-zone

# Open 443 on tcp
firewall-cmd --permanent --zone=<zone> --add-port=443/tcp

# Open 80 and 443 on tcp and udp
firewall-cmd --permanent --zone=<zone> --add-port={80,443}/{tcp,udp}

# List available services, then open http and https
firewall-cmd --get-services
firewall-cmd --permanent --zone=<zone> --add-service={http,https}

# Reload so the --permanent rules take effect
firewall-cmd --reload
```

## Backup rabbitmq

Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)

## Upgrade rabbitmq

### Upgrade Quadlets

Upgrades should be a repeat of [writing the compose spec](#convert-rabbitmq-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)

```bash
export PODMAN_SERVER=
scp -r quadlets/. $PODMAN_SERVER:/home/rabbitmq/.config/containers/systemd/
ssh rabbitmq systemctl --user daemon-reload
ssh rabbitmq systemctl --user restart rabbitmq
```
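
For orientation, a quadlet is just a systemd unit with a `[Container]` section; a minimal sketch of what a generated `rabbitmq.container` might look like (the image, port, and volume values are illustrative, not the repo's actual spec):

```ini
# ~/.config/containers/systemd/rabbitmq.container
[Container]
ContainerName=rabbitmq
Image=docker.io/library/rabbitmq:4-management
EnvironmentFile=/srv/rabbitmq/rabbitmq.env
PublishPort=5672:5672
Volume=/srv/rabbitmq/data:/var/lib/rabbitmq:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```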

## Uninstall

```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer

# Delete the user (this won't delete their home directory).
# userdel might spit out an error like:
#   userdel: user rabbitmq is currently used by process 591255
# Kill those processes and try again.
userdel rabbitmq
```

## Notes

### SELinux

<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>

`:z` allows a container to share a mounted volume with all other containers.

`:Z` allows a container to reserve a mounted volume and prevents any other container from accessing it.
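
In practice these are suffixes on the volume mapping, whether in a `podman run -v` flag or a quadlet `Volume=` line; the paths below are illustrative:

```ini
[Container]
# :z — shared label, other containers may mount the same path
Volume=/srv/rabbitmq/shared:/shared:z
# :Z — private label, relabeled for this container only
Volume=/srv/rabbitmq/data:/var/lib/rabbitmq:Z
```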

@@ -5,6 +5,7 @@

- [Important Locations](#important-locations)
- [Monitoring Scripts](#monitoring-scripts)
- [Quick Ansible Commands](#quick-ansible-commands)
- [Quickstart VM](#quickstart-vm)
- [Disk Mounts](#disk-mounts)
- [Disk Performance Testing](#disk-performance-testing)
- [General VM Notes](#general-vm-notes)

@@ -45,6 +46,35 @@ ansible-playbook -i ansible/inventory.yaml -l proxy active/container_caddy/insta

ansible-playbook -i ansible/inventory.yaml -l proxy active/container_ddns/install_ddns.yaml

## Quickstart VM

Default user: `ducoterra`

Default password: `osbuild`

- [ ] `passwd ducoterra`
- [ ] `hostnamectl hostname <hostname>`
- [ ] Updates
- [ ] Static IP and DNS address

```bash
# Convert the build to raw
qemu-img convert -f qcow2 -O raw \
  /srv/smb/pool0/ducoterra/images/builds/fedora-43-base.qcow2 \
  /srv/vm/pool1/fedora-boot.raw

# Install (Change password for default user ducoterra!)
virt-install \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:bridge0 \
  --graphics none \
  --console pty,target.type=virtio \
  --name "fedora" \
  --import --disk "path=/srv/vm/pool1/fedora-boot.raw,bus=virtio"
```

## Disk Mounts

1. All btrfs `subvolid=5` volumes should be mounted under `/btrfs`

@@ -102,7 +132,6 @@ qemu-img convert -f qcow2 -O raw \

# Install (Change password for default user ducoterra!)
virt-install \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \

@@ -110,8 +139,13 @@

  --network bridge:bridge0 \
  --graphics none \
  --console pty,target.type=virtio \
  --name "fedora" \
  --import --disk "path=/srv/vm/pool1/fedora-boot.raw,bus=virtio"

# If you need to pass through a PCIe card
  --hostdev pci_0000_4e_00_0 \
  --hostdev pci_0000_4e_00_1

# convert a cloud-init image to raw
qemu-img convert -f qcow2 -O raw \
  /srv/smb/ducoterra/images/cloud/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2 \

BIN  active/device_esphome/images/va_idle.png (new file, 277 KiB)
BIN  active/device_esphome/images/va_idle.xcf (new file)
BIN  active/device_esphome/images/va_listen.png (new file, 431 KiB)
BIN  active/device_esphome/images/va_listen.xcf (new file)
BIN  active/device_esphome/images/va_speak.png (new file, 430 KiB)
BIN  active/device_esphome/images/va_speak.xcf (new file)

425  active/device_esphome/tab5.yaml (new file)

@@ -0,0 +1,425 @@

```yaml
esphome:
  name: tab1
  friendly_name: M5Stack Tab5 1

esp32:
  board: esp32-p4-evboard
  flash_size: 16MB
  framework:
    type: esp-idf
    advanced:
      enable_idf_experimental_features: true

esp32_hosted:
  variant: esp32c6
  active_high: true
  clk_pin: GPIO12
  cmd_pin: GPIO13
  d0_pin: GPIO11
  d1_pin: GPIO10
  d2_pin: GPIO9
  d3_pin: GPIO8
  reset_pin: GPIO15
  slot: 1

logger:
  hardware_uart: USB_SERIAL_JTAG

psram:
  mode: hex
  speed: 200MHz

api:

# Touchscreen support
external_components:
  - source: github://pr#12075
    components: [st7123]
    refresh: 1h

ota:
  - platform: esphome

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  on_connect:
    - lvgl.label.update:
        id: lbl_status
        text: "IDLE"
    - select.set:
        id: dac_output
        option: "LINE1"
  on_disconnect:
    - lvgl.label.update:
        id: lbl_status
        text: "DISCONNECTED"

i2c:
  - id: bsp_bus
    sda: GPIO31
    scl: GPIO32
    frequency: 400kHz

pi4ioe5v6408:
  - id: pi4ioe1
    address: 0x43
    # 0: O - wifi_antenna_int_ext
    # 1: O - speaker_enable
    # 2: O - external_5v_power
    # 3: NC
    # 4: O - lcd reset
    # 5: O - touch panel reset
    # 6: O - camera reset
    # 7: I - headphone detect
  - id: pi4ioe2
    address: 0x44
    # 0: O - wifi_power
    # 1: NC
    # 2: NC
    # 3: O - usb_5v_power
    # 4: O - poweroff pulse
    # 5: O - quick charge enable (inverted)
    # 6: I - charging status
    # 7: O - charge enable

button:
  - platform: restart
    name: "Restart Tablet"

switch:
  - platform: gpio
    id: wifi_power
    name: "WiFi Power"
    pin:
      pi4ioe5v6408: pi4ioe2
      number: 0
    restore_mode: ALWAYS_ON
  - platform: gpio
    id: usb_5v_power
    name: "USB Power"
    pin:
      pi4ioe5v6408: pi4ioe2
      number: 3
  - platform: gpio
    id: quick_charge
    name: "Quick Charge"
    pin:
      pi4ioe5v6408: pi4ioe2
      number: 5
      inverted: true
  - platform: gpio
    id: charge_enable
    name: "Charge Enable"
    pin:
      pi4ioe5v6408: pi4ioe2
      number: 7
    restore_mode: ALWAYS_ON
  - platform: gpio
    id: wifi_antenna_int_ext
    pin:
      pi4ioe5v6408: pi4ioe1
      number: 0
  - platform: gpio
    id: speaker_enable
    name: "Speaker Enable"
    pin:
      pi4ioe5v6408: pi4ioe1
      number: 1
    restore_mode: ALWAYS_ON
  - platform: gpio
    id: external_5v_power
    name: "External 5V Power"
    pin:
      pi4ioe5v6408: pi4ioe1
      number: 2

binary_sensor:
  - platform: gpio
    id: charging
    name: "Charging Status"
    pin:
      pi4ioe5v6408: pi4ioe2
      number: 6
      mode: INPUT_PULLDOWN

  - platform: gpio
    id: headphone_detect
    name: "Headphone Detect"
    pin:
      pi4ioe5v6408: pi4ioe1
      number: 7

sensor:
  - platform: ina226
    address: 0x41
    adc_averaging: 16
    max_current: 8.192A
    shunt_resistance: 0.005ohm
    bus_voltage:
      id: battery_voltage
      name: "Battery Voltage"
    current:
      id: battery_current
      name: "Battery Current"
      # Positive means discharging
      # Negative means charging

  # Tab5 built-in battery discharges from full (8.23 V) to shutdown threshold (6.0 V)
  - platform: template
    name: "Battery Percentage"
    lambda: |-
      float voltage = id(battery_voltage).state;
      // Adjust these values based on your battery's actual min/max voltage
      float min_voltage = 6.75; // Discharged voltage
      float max_voltage = 8.2; // Fully charged voltage
      float percentage = (voltage - min_voltage) / (max_voltage - min_voltage) * 100.0;
      if (percentage > 100.0) return 100.0;
      if (percentage < 0.0) return 0.0;
      return percentage;
    update_interval: 60s
    unit_of_measurement: "%"
    accuracy_decimals: 1
    id: battery_percent
    on_value:
      then:
        - lvgl.label.update:
            id: lbl_battery
            text:
              format: "Battery: %.1f%%"
              args: ["id(battery_percent).state"]

touchscreen:
  - platform: st7123
    i2c_id: bsp_bus
    interrupt_pin: GPIO23
    display: lcd
    update_interval: never
    reset_pin:
      pi4ioe5v6408: pi4ioe1
      number: 5
    calibration:
      x_min: 0
      x_max: 720
      y_min: 0
      y_max: 1280
    id: touch
    on_touch:
      - logger.log: "LVGL resuming"
      - lvgl.resume:
      - light.turn_on: backlight
    on_release:
      - media_player.stop:

esp_ldo:
  - voltage: 2.5V
    channel: 3

display:
  - platform: mipi_dsi
    id: lcd
    dimensions:
      height: 1280
      width: 720
    model: M5STACK-TAB5-V2
    reset_pin:
      pi4ioe5v6408: pi4ioe1
      number: 4

output:
  - platform: ledc
    pin: GPIO22
    id: backlight_pwm
    frequency: 1000Hz

light:
  - platform: monochromatic
    output: backlight_pwm
    name: "Display Backlight"
    id: backlight
    restore_mode: ALWAYS_ON
    default_transition_length: 250ms
    initial_state:
      brightness: "100%"

image:
  defaults:
    type: rgb565
    transparency: alpha_channel
    resize: 512x512
    byte_order: little_endian
  images:
    - file: "images/va_idle.png"
      id: va_idle
    - file: "images/va_listen.png"
      id: va_listen
    - file: "images/va_speak.png"
      id: va_speak

lvgl:
  byte_order: little_endian

  on_idle:
    timeout: 120s
    then:
      - logger.log: "LVGL is idle"
      - light.turn_off:
          id: backlight
          transition_length: 15s
      - lvgl.pause:
  widgets:
    - image:
        id: listen_icon_widget
        src: va_idle
        align: CENTER
    - label:
        align: TOP_MID
        id: lbl_status
        text_font: montserrat_48
        text: "CONNECTING..."
    - label:
        align: BOTTOM_LEFT
        id: lbl_version
        text_font: montserrat_12
        text: "v0.5"
    - label:
        align: BOTTOM_RIGHT
        id: lbl_battery
        text_font: montserrat_28
        text: Loading...

# The DAC Output select needs to be manually (or with an automation) changed to `LINE1` for the onboard speaker
select:
  - platform: es8388
    dac_output:
      name: DAC Output
      id: dac_output
    adc_input_mic:
      name: ADC Input Mic
      id: adc_input

  - platform: template
    id: wifi_antenna_select
    name: "WiFi Antenna"
    options:
      - "Internal"
      - "External"
    optimistic: true
    on_value:
      - if:
          condition:
            lambda: return i == 0;
          then:
            - switch.turn_off: wifi_antenna_int_ext
          else:
            - switch.turn_on: wifi_antenna_int_ext

i2s_audio:
  - id: mic_bus
    i2s_lrclk_pin: GPIO29
    i2s_bclk_pin: GPIO27
    i2s_mclk_pin: GPIO30

audio_adc:
  - platform: es7210
    id: es7210_adc
    bits_per_sample: 16bit
    sample_rate: 16000

microphone:
  - platform: i2s_audio
    id: tab5_microphone
    i2s_din_pin: GPIO28
    sample_rate: 16000
    bits_per_sample: 16bit
    adc_type: external

audio_dac:
  - platform: es8388
    id: es8388_dac

speaker:
  - platform: i2s_audio
    id: tab5_speaker
    i2s_dout_pin: GPIO26
    audio_dac: es8388_dac
    dac_type: external
    channel: mono
    buffer_duration: 100ms
    bits_per_sample: 16bit
    sample_rate: 48000

media_player:
  - platform: speaker
    name: None
    id: tab5_media_player
    announcement_pipeline:
      speaker: tab5_speaker
      format: WAV

micro_wake_word:
  id: mww
  models:
    - okay_nabu
    - hey_mycroft
    - hey_jarvis
  on_wake_word_detected:
    - voice_assistant.start:
        wake_word: !lambda return wake_word;

voice_assistant:
  id: va
  microphone: tab5_microphone
  media_player: tab5_media_player
  micro_wake_word: mww
  on_listening:
    - logger.log: "LVGL resuming"
    - lvgl.resume:
    - light.turn_on: backlight
    - lvgl.image.update:
        id: listen_icon_widget
        src: va_listen
    - lvgl.label.update:
        id: lbl_status
        text: "LISTENING"
  on_stt_vad_end:
    - lvgl.label.update:
        id: lbl_status
        text: "PROCESSING"
    - lvgl.image.update:
        id: listen_icon_widget
        src: va_idle
  on_tts_start:
    - lvgl.label.update:
        id: lbl_status
        text: "RESPONDING"
    - lvgl.image.update:
        id: listen_icon_widget
        src: va_speak
  on_end:
    # Wait a short amount of time to see if an announcement starts
    - wait_until:
        condition:
          - media_player.is_announcing:
        timeout: 0.5s
    # Announcement is finished and the I2S bus is free
    - wait_until:
        - and:
            - not:
                media_player.is_announcing:
            - not:
                speaker.is_playing:
    - micro_wake_word.start:
    - lvgl.label.update:
        id: lbl_status
        text: "IDLE"
    - lvgl.image.update:
        id: listen_icon_widget
        src: va_idle
    - light.turn_off:
        id: backlight
        transition_length: 15s
  on_client_connected:
    - micro_wake_word.start:
  on_client_disconnected:
    - micro_wake_word.stop:
```
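
The `Battery Percentage` template sensor in `tab5.yaml` above is just a clamped linear map from pack voltage to percent; a quick sketch of the same arithmetic in Python (endpoints taken from the config):

```python
# Clamped linear voltage-to-percent map, mirroring the ESPHome lambda above.
MIN_V = 6.75  # discharged voltage (from the config)
MAX_V = 8.2   # fully charged voltage (from the config)

def battery_percentage(voltage: float) -> float:
    pct = (voltage - MIN_V) / (MAX_V - MIN_V) * 100.0
    return max(0.0, min(100.0, pct))
```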

@@ -6,36 +6,7 @@

- [Notes](#notes)
  - [Firmware and Kernel](#firmware-and-kernel)
    - [Kernel args](#kernel-args)
  - [Volume Locations](#volume-locations)
- [AI](#ai)
- [Setup](#setup)
  - [Create the AI user](#create-the-ai-user)
  - [Helper aliases](#helper-aliases)
  - [Create the models dir](#create-the-models-dir)
  - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
  - [Samba Model Storage](#samba-model-storage)
  - [Download models](#download-models)
    - [Text models](#text-models)
      - [GPT-OSS](#gpt-oss)
      - [Mistral](#mistral)
      - [Nemotron](#nemotron)
      - [Qwen](#qwen)
      - [GLM](#glm)
      - [Llama](#llama)
      - [Gemma](#gemma)
      - [Dolphin (Abliterated)](#dolphin-abliterated)
    - [Image models](#image-models)
      - [Z-Image](#z-image)
      - [Flux](#flux)
      - [Qwen Image 2512](#qwen-image-2512)
    - [Embedding Models](#embedding-models)
      - [Nomic](#nomic)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [VLLM](#vllm)
- [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)
- [Install the update script](#install-the-update-script)
- [Install Guest Open Webui with Start/Stop Services](#install-guest-open-webui-with-startstop-services)

## BIOS

@@ -65,517 +36,6 @@ amd_iommu=off amdgpu.gttsize=126976 ttm.pages_limit=32505856

Then `grub2-mkconfig -o /boot/grub2/grub.cfg` and `reboot`.

### Volume Locations

`~/.local/share/containers/storage/volumes/`

## AI

See [Self Hosted AI Stack](/active/software_ai_stack/ai_stack.md)

## Setup

### Create the AI user

```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p /home/ai/.config/containers/systemd/
mkdir -p /home/ai/.ssh
```

Models are big. You'll want some tools to help find large files quickly when space runs out.

### Helper aliases

Add these to your `.bashrc`:

```bash
# Calculate all folder sizes in current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# Calculate all file sizes in current dir
alias {dufile,duf}='ls -lhSr'

# Restart llama-server / follow logs
alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
alias llama-logs="journalctl --user -fu llama-server"

# Restart the stable diffusion gen and edit servers / follow logs
alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
```
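
The `dud` alias above can be approximated in plain Python when `du` isn't handy; a sketch that walks each immediate subdirectory and totals file sizes:

```python
import os

def dir_sizes(root: str) -> dict[str, int]:
    """Total size in bytes of each immediate subdirectory of root,
    a rough equivalent of `du --max-depth 1`."""
    sizes: dict[str, int] = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _dirs, files in os.walk(entry.path):
                for name in files:
                    path = os.path.join(dirpath, name)
                    if os.path.isfile(path):
                        total += os.path.getsize(path)
            sizes[entry.name] = total
    return sizes
```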

### Create the models dir

```bash
mkdir -p /home/ai/models/{text,image,video,embedding,tts,stt}
```

### Install the Hugging Face CLI

<https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started>

```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash

# Login
hf auth login
```

### Samba Model Storage

I recommend adding network storage for keeping models offloaded. This mounts a samba share at `/srv/models`.

```bash
# Add this to /etc/fstab
//driveripper.reeselink.com/smb_models /srv/models cifs _netdev,nofail,uid=1001,gid=1001,credentials=/etc/samba/credentials 0 0

# Then mount
systemctl daemon-reload
mount -a --mkdir
```

Here is a sync command I use to keep the samba share in sync with the home directory:

```bash
# Sync models from home dir to the samba share
rsync -av --progress /home/ai/models/ /srv/models/
```

### Download models

#### Text models

<https://huggingface.co/ggml-org/collections>

##### GPT-OSS

```bash
# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF

# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF
```

##### Mistral

```bash
# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf

# devstral-small-2-24b
mkdir /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf

# ministral-3-14b
mkdir /home/ai/models/text/ministral-3-14b
hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF

# ministral-3-3b-instruct
mkdir /home/ai/models/text/ministral-3-3b-instruct
hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF
```

##### Nemotron

```bash
# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf
```

##### Qwen

```bash
# qwen3-30b-a3b-thinking
mkdir /home/ai/models/text/qwen3-30b-a3b-thinking
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-thinking ggml-org/Qwen3-30B-A3B-GGUF Qwen3-30B-A3B-Q4_K_M.gguf

# qwen3-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF

# qwen3-coder-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF

# qwen3-coder-next
mkdir /home/ai/models/text/qwen3-coder-next
hf download --local-dir /home/ai/models/text/qwen3-coder-next unsloth/Qwen3-Coder-Next-GGUF --include "Q5_K_M/*.gguf"

# qwen3-vl-30b-thinking
mkdir /home/ai/models/text/qwen3-vl-30b-thinking
hf download --local-dir /home/ai/models/text/qwen3-vl-30b-thinking unsloth/Qwen3-VL-30B-A3B-Thinking-1M-GGUF Qwen3-VL-30B-A3B-Thinking-1M-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/qwen3-vl-30b-thinking unsloth/Qwen3-VL-30B-A3B-Thinking-1M-GGUF mmproj-F16.gguf

# qwen3-vl-8b-instruct
mkdir /home/ai/models/text/qwen3-vl-8b-instruct
hf download --local-dir /home/ai/models/text/qwen3-vl-8b-instruct Qwen/Qwen3-VL-8B-Instruct-GGUF Qwen3VL-8B-Instruct-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/qwen3-vl-8b-instruct Qwen/Qwen3-VL-8B-Instruct-GGUF mmproj-Qwen3VL-8B-Instruct-Q8_0.gguf

# qwen3-4b-2507-abliterated
mkdir /home/ai/models/text/qwen3-4b-2507-abliterated
hf download --local-dir /home/ai/models/text/qwen3-4b-2507-abliterated prithivMLmods/Qwen3-4B-2507-abliterated-GGUF Qwen3-4B-Thinking-2507-abliterated-GGUF/Qwen3-4B-Thinking-2507-abliterated.Q4_K_M.gguf

# qwen3-48b-a4b-abliterated
mkdir /home/ai/models/text/qwen3-48b-a4b-abliterated
hf download --local-dir /home/ai/models/text/qwen3-48b-a4b-abliterated DavidAU/Qwen3-48B-A4B-Savant-Commander-Distill-12X-Closed-Open-Heretic-Uncensored-GGUF Qwen3-48B-A4B-Savant-Commander-Dstll-12X-Cl-Op-Hrtic-Uncen-Q4_K_M.gguf
```

##### GLM

```bash
# glm-4.7-flash-30b
mkdir /home/ai/models/text/glm-4.7-flash-30b
hf download --local-dir /home/ai/models/text/glm-4.7-flash-30b unsloth/GLM-4.7-Flash-GGUF GLM-4.7-Flash-Q4_K_M.gguf

# glm-4.6v
mkdir /home/ai/models/text/glm-4.6v
hf download --local-dir /home/ai/models/text/glm-4.6v unsloth/GLM-4.6V-GGUF --include "Q4_K_M/*.gguf"
hf download --local-dir /home/ai/models/text/glm-4.6v unsloth/GLM-4.6V-GGUF mmproj-F16.gguf

# glm-4.6v-flash
mkdir /home/ai/models/text/glm-4.6v-flash
hf download --local-dir /home/ai/models/text/glm-4.6v-flash unsloth/GLM-4.6V-Flash-GGUF GLM-4.6V-Flash-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/glm-4.6v-flash unsloth/GLM-4.6V-Flash-GGUF mmproj-F16.gguf
```

##### Llama

```bash
# llama4-scout
mkdir /home/ai/models/text/llama4-scout
# Remember to move the gguf files into the llama4-scout folder, otherwise they won't be picked up
hf download --local-dir /home/ai/models/text/llama4-scout unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF --include "Q4_K_M/*.gguf"
hf download --local-dir /home/ai/models/text/llama4-scout unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF mmproj-F16.gguf
```

##### Gemma

```bash
# Note "it" vs "pt" suffixes. "it" is instruction following, "pt" is the base model (not as good for out-of-the-box use)
# gemma-3-27b-it
mkdir /home/ai/models/text/gemma-3-27b-it
hf download --local-dir /home/ai/models/text/gemma-3-27b-it unsloth/gemma-3-27b-it-GGUF gemma-3-27b-it-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/gemma-3-27b-it unsloth/gemma-3-27b-it-GGUF mmproj-F16.gguf
```

##### Dolphin (Abliterated)

```bash
# dolphin-x1-8b
mkdir /home/ai/models/text/dolphin-x1-8b
hf download --local-dir /home/ai/models/text/dolphin-x1-8b dphn/Dolphin-X1-8B-GGUF Dolphin-X1-8B-Q4_K_M.gguf

# dolphin-mistral-24b-venice
mkdir /home/ai/models/text/dolphin-mistral-24b-venice
hf download --local-dir /home/ai/models/text/dolphin-mistral-24b-venice bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf
```
#### Image models

##### Z-Image

```bash
# z-turbo
# Fastest image generation, in 8 steps. Great at text and prompt following.
# Lacks variety.
mkdir /home/ai/models/image/z-turbo
hf download --local-dir /home/ai/models/image/z-turbo leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf
hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf

# z-image
# Full version of z-turbo. Needs 28-50 steps.
# Note: image quality is not as good as z-turbo.
mkdir /home/ai/models/image/z-image
hf download --local-dir /home/ai/models/image/z-image unsloth/Z-Image-GGUF z-image-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/z-image black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-image unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf
```
##### Flux

```bash
# flux2-klein
# Capable of generating images in 4 steps
mkdir /home/ai/models/image/flux2-klein
hf download --local-dir /home/ai/models/image/flux2-klein leejet/FLUX.2-klein-9B-GGUF flux-2-klein-9b-Q4_0.gguf
hf download --local-dir /home/ai/models/image/flux2-klein black-forest-labs/FLUX.2-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux2-klein unsloth/Qwen3-8B-GGUF Qwen3-8B-Q4_K_M.gguf

# flux-1-kontext
mkdir /home/ai/models/image/flux-1-kontext
hf download --local-dir /home/ai/models/image/flux-1-kontext QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/flux-1-kontext black-forest-labs/FLUX.1-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders clip_l.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors
```
##### Qwen Image 2512

```bash
```
#### Embedding Models

##### Nomic

```bash
# nomic-embed-text-v2
mkdir /home/ai/models/embedding/nomic-embed-text-v2
hf download --local-dir /home/ai/models/embedding/nomic-embed-text-v2 ggml-org/Nomic-Embed-Text-V2-GGUF
```
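Downloads of this size fill disks fast, so it helps to audit per-model disk usage. A sketch with `du`, run here against a throwaway tree so it is safe anywhere; the real path would be `/home/ai/models`:

```shell
# Build a throwaway stand-in for /home/ai/models so the sketch is safe to run.
MODELS_DIR=$(mktemp -d)
mkdir -p "$MODELS_DIR"/text/demo-model
head -c 1024 /dev/zero > "$MODELS_DIR"/text/demo-model/demo.gguf

# Per-model sizes, smallest first: incomplete downloads stand out at the top.
du -sh "$MODELS_DIR"/text/* | sort -h
```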
## llama.cpp

<https://github.com/ggml-org/llama.cpp/tree/master/tools/server>

```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .

# ROCM
podman build -f .devops/rocm.Dockerfile -t llama-cpp-rocm:${BUILD_TAG} -t llama-cpp-rocm:latest .

# Run llama server (available on port 8000)
# Add `--n-cpu-moe 32` to gpt-oss-120b to keep a minimal number of experts in GPU memory
podman run \
  --rm \
  --name llama-server-demo \
  --device=/dev/kfd \
  --device=/dev/dri \
  --pod systemd-ai-internal \
  -v /home/ai/models/text:/models:z \
  localhost/llama-cpp-vulkan:latest \
  --port 8000 \
  -c 32000 \
  --perf \
  --n-gpu-layers all \
  --jinja \
  --models-max 1 \
  --models-dir /models

# ROCM
podman run \
  --rm \
  --name llama-server-demo \
  --device=/dev/kfd \
  --device=/dev/dri \
  --pod systemd-ai-internal \
  -v /home/ai/models/text:/models:z \
  localhost/llama-cpp-rocm:latest \
  --port 8000 \
  -c 0 \
  --perf \
  --n-gpu-layers all \
  --jinja \
  --models-max 1 \
  --models-dir /models
```
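Once the server is up it answers with OpenAI-style JSON. A dependency-free way to pull the assistant text out with `sed`, shown on a canned response since no live server is assumed:

```shell
# Canned chat-completion response standing in for real server output.
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"Hello there"}}]}'

# Extract the "content" field (fine for simple payloads without embedded quotes).
echo "$RESPONSE" | sed -n 's/.*"content":"\([^"]*\)".*/\1/p'
```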
## stable-diffusion.cpp

Server: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/server>

CLI: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/cli>

```bash
git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
```
```bash
# z-turbo
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
  --vae /models/image/z-turbo/ae.safetensors \
  --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  -v \
  --seed -1 \
  --steps 8 \
  --vae-conv-direct \
  -H 1024 \
  -W 1024 \
  -o /output/output.png \
  -p "A photorealistic dragon"

# z-image
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/z-image/z-image-Q4_K_M.gguf \
  --vae /models/image/z-image/ae.safetensors \
  --llm /models/image/z-image/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  -v \
  --seed -1 \
  --steps 28 \
  --vae-conv-direct \
  -H 1024 \
  -W 1024 \
  -o /output/output.png \
  -p "A photorealistic dragon"

# flux2-klein
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \
  --vae /models/image/flux2-klein/ae.safetensors \
  --llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  --steps 4 \
  -v \
  --seed -1 \
  --vae-conv-direct \
  -H 1024 \
  -W 1024 \
  -o /output/output.png \
  -p "A photorealistic dragon"

# Edit with flux2 klein
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \
  --vae /models/image/flux2-klein/ae.safetensors \
  --llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  --sampling-method euler \
  -v \
  --vae-conv-direct \
  --steps 4 \
  -r /output/kontext_input.png \
  -o /output/output.png \
  -p "change 'flux.cpp' to 'klein.cpp'"

# Edit with flux kontext
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
  --vae /models/image/flux-1-kontext/ae.safetensors \
  --clip_l /models/image/flux-1-kontext/clip_l.safetensors \
  --t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
  --cfg-scale 1.0 \
  --sampling-method euler \
  --seed -1 \
  --steps 28 \
  --vae-conv-direct \
  -v \
  -H 512 \
  -W 512 \
  -o /output/output.png \
  -r /output/everquest_logo.png \
  -p "Add the text 'EverQuest'"
```
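After a run it is worth confirming that `-o` actually produced a PNG and not a zero-byte file. A small check of the 8-byte PNG signature, demonstrated on a hand-made file since no GPU run is assumed:

```shell
# Stand-in output dir and file; a real check would point at /home/ai/output/output.png.
OUT=$(mktemp -d)
printf '\x89PNG\r\n\x1a\n' > "$OUT/output.png"

# The first 8 bytes of every PNG are 89 50 4e 47 0d 0a 1a 0a.
if head -c 8 "$OUT/output.png" | od -An -tx1 | tr -d ' \n' | grep -q '^89504e470d0a1a0a$'; then
    echo "valid PNG header"
fi
```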
## open-webui

```bash
mkdir /home/ai/.env
# Create a file called open-webui-env containing `WEBUI_SECRET_KEY="some-random-key"`
scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/

# Will be available on port 8080
podman run \
  -d \
  --pod ai \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

Use the following connections:

| Service              | Endpoint                                  |
| -------------------- | ----------------------------------------- |
| llama.cpp            | <http://host.containers.internal:8000>    |
| stable-diffusion.cpp | <http://host.containers.internal:1234/v1> |
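The `WEBUI_SECRET_KEY` in the env file above can be generated rather than invented. A sketch that writes it to a temp dir so it runs anywhere; the real file would go in `/home/ai/.env/open-webui-env`:

```shell
# Temp dir stands in for /home/ai/.env.
ENV_DIR=$(mktemp -d)

# 32 random bytes rendered as 64 hex characters.
SECRET=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf 'WEBUI_SECRET_KEY="%s"\n' "$SECRET" > "$ENV_DIR/open-webui-env"
grep WEBUI_SECRET_KEY "$ENV_DIR/open-webui-env"
```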
## VLLM

```bash
mkdir -p /home/ai/vllm/.cache/huggingface

podman run --rm \
  --device /dev/kfd \
  --device /dev/dri \
  --group-add=video \
  --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --ipc=host \
  --env "HF_TOKEN=$HF_TOKEN" \
  -v /home/ai/vllm/.cache/huggingface:/root/.cache/huggingface:z \
  -p 8002:8000 \
  docker.io/vllm/vllm-openai-rocm:latest \
  --model Qwen/Qwen3-0.6B
```
## Install the whole thing with quadlets (TM)

```bash
# Installs and runs all services in `quadlets/`
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-internal-pod.service
```

Note: all services will be available at `host.containers.internal`, so llama.cpp
will be up at `http://host.containers.internal:8000`.
### Install the update script

```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```
## Install Guest Open Webui with Start/Stop Services

```bash
scp -r active/device_framework_desktop/systemd/. deskwork-ai:.config/systemd/user/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user enable open-webui-guest-start.timer
systemctl --user enable open-webui-guest-stop.timer
```
@@ -1,2 +0,0 @@
[Network]
IPv6=true

@@ -1,6 +0,0 @@
[Pod]
# ai-external is the primary network
Network=ai-external.network
Network=ai-internal.network
# open-webui
PublishPort=8080:8080/tcp

@@ -1,32 +0,0 @@
[Unit]
Description=An Open Webui Frontend for Local AI Services for Guests

[Container]
# Shared AI external pod
Pod=ai-external.pod

# Open Webui base image
Image=ghcr.io/open-webui/open-webui:main

# Nothing too complicated here. Open Webui will basically configure itself.
Volume=open-webui-data-guest:/app/backend/data

# WEBUI_SECRET_KEY is required to prevent logout on restart
EnvironmentFile=/home/ai/.env/open-webui-env-guest

# ai-external is the primary network
Network=ai-external.network
Network=ai-internal.network

# open-webui
PublishPort=8081:8081/tcp

[Service]
Restart=on-failure
RestartSec=5
# Extend timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
1 active/device_keychron/Keymap-Q8 KNOB-11-10-44.json Normal file
File diff suppressed because one or more lines are too long

14 active/device_keychron/README.md Normal file
@@ -0,0 +1,14 @@
# Keychron

## VIA

<https://launcher.keychron.com/#/keymap>

On Linux with Chromium you'll sometimes see "failed to connect" errors. This can
be resolved with `chmod a+rw /dev/hidrawX` where `X` is the id of the keyboard.

## Q8 Alice

![Layer 1](q8_L1.png)

![Layer 2](q8_L2.png)

BIN active/device_keychron/q8_L1.png Normal file (binary file not shown; 231 KiB)
BIN active/device_keychron/q8_L2.png Normal file (binary file not shown; 230 KiB)
@@ -854,11 +854,12 @@ sudo dnf install -y koji
# Search for the desired kernel version
koji search build kernel-6.18.3*
export KERNEL_VERSION=6.18.12

# Create a temporary directory to store the downloaded kernel packages
sudo -i
mkdir /tmp/kernel-download-${KERNEL_VERSION}
cd /tmp/kernel-download-${KERNEL_VERSION}

# Download the kernel packages
koji download-build --arch=x86_64 kernel-6.18.3-200.fc43
10 active/os_fedora/fedora.md Normal file
@@ -0,0 +1,10 @@
# Fedora

## Kernel Rescue

1. Check that `/boot` and `/boot/efi` aren't full
2. `mkdir -p /boot/efi/loader/entries`
3. `mkdir -p /boot/efi/$(cat /etc/machine-id)`
4. Check for other missing directories and create as needed
5. `dracut -f --regenerate-all` to regenerate missing kernels
6. `dnf reinstall kernel*` to rerun kernel installation scripts
@@ -1,7 +1,11 @@
[Pod]
Network=ai-internal.network
# llama.cpp server
PublishPort=8000:8000/tcp
# llama.cpp embed
PublishPort=8001:8001/tcp
# llama.cpp instruct
PublishPort=8002:8002/tcp
# stable-diffusion.cpp gen
PublishPort=1234:1234/tcp
# stable-diffusion.cpp edit
770 active/software_ai_stack/ai_stack.md Normal file
@@ -0,0 +1,770 @@
# Self Hosted AI Stack

- [Self Hosted AI Stack](#self-hosted-ai-stack)
  - [Notes](#notes)
    - [Podman Volume Locations](#podman-volume-locations)
    - [List of Internal Links](#list-of-internal-links)
  - [Quick Install](#quick-install)
    - [Text Stack](#text-stack)
    - [Image Stack](#image-stack)
  - [Setup](#setup)
    - [Create the AI user](#create-the-ai-user)
    - [Helper aliases](#helper-aliases)
    - [Create the models dir](#create-the-models-dir)
    - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
    - [Samba Model Storage](#samba-model-storage)
    - [Download models](#download-models)
      - [Text models](#text-models)
        - [GPT-OSS](#gpt-oss)
        - [Mistral](#mistral)
        - [Qwen](#qwen)
        - [GLM](#glm)
        - [Gemma](#gemma)
        - [Dolphin](#dolphin)
        - [LiquidAI](#liquidai)
        - [Level 1 Techs](#level-1-techs)
      - [Image models](#image-models)
        - [Z-Image](#z-image)
        - [Flux](#flux)
      - [Embedding Models](#embedding-models)
        - [Qwen Embedding](#qwen-embedding)
        - [Nomic Embedding](#nomic-embedding)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [lite-llm](#lite-llm)
  - [Install Services with Quadlets](#install-services-with-quadlets)
    - [Internal and External Pods](#internal-and-external-pods)
    - [Llama CPP Server (Port 8000)](#llama-cpp-server-port-8000)
    - [Llama CPP Embedding Server (Port 8001)](#llama-cpp-embedding-server-port-8001)
    - [Llama CPP Instruct Server (Port 8002)](#llama-cpp-instruct-server-port-8002)
    - [Stable Diffusion CPP (Port 1234 and 1235)](#stable-diffusion-cpp-port-1234-and-1235)
    - [Open Webui (Port 8080)](#open-webui-port-8080)
    - [Install the update script](#install-the-update-script)
    - [Install Guest Open Webui with Start/Stop Services](#install-guest-open-webui-with-startstop-services)
  - [Benchmark Results](#benchmark-results)
  - [Testing with Curl](#testing-with-curl)
    - [OpenAI API](#openai-api)
  - [Misc](#misc)
    - [Qwen3.5 Settings](#qwen35-settings)
## Notes

```bash
# Shortcut for downloading models
hf-download ()
{
    if [ $# -ne 3 ]; then
        echo "ERROR: Expected 3 arguments, but only got $#" 1>&2
        return 1
    fi
    BASE_DIR='/opt/ai/models'
    mkdir -p "$BASE_DIR/$1"
    pushd "$BASE_DIR/$1" >/dev/null 2>&1
    hf download --local-dir . "$2" "$3"
    popd >/dev/null 2>&1
}
```
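A quick way to exercise the helper without the real CLI or network is to stub out `hf`; the stub and the temp `BASE_DIR` below are assumptions for the sketch, not part of the real setup:

```shell
# Stub `hf` so the sketch runs offline.
hf() { echo "hf $*"; }

# Same helper as above, with BASE_DIR pointed at a temp dir for the demo.
hf-download ()
{
    if [ $# -ne 3 ]; then
        echo "ERROR: Expected 3 arguments, but only got $#" 1>&2
        return 1
    fi
    BASE_DIR=$(mktemp -d)   # stand-in for /opt/ai/models
    mkdir -p "$BASE_DIR/$1"
    pushd "$BASE_DIR/$1" >/dev/null 2>&1
    hf download --local-dir . "$2" "$3"
    popd >/dev/null 2>&1
}

hf-download text/demo unsloth/Demo-GGUF demo.gguf    # runs the stubbed download
hf-download text/demo 2>/dev/null || echo "guard works"   # wrong arg count is rejected
```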
### Podman Volume Locations

`~/.local/share/containers/storage/volumes/`

### List of Internal Links

- llama-cpp
- llama-embed
- llama-instruct
- image-gen
- image-edit
- openwebui
## Quick Install

### Text Stack

```bash
ansible-playbook \
  -i ansible/inventory.yaml \
  active/software_ai_stack/install_ai_text_stack.yaml
```

### Image Stack

```bash
ansible-playbook \
  -i ansible/inventory.yaml \
  active/software_ai_stack/install_ai_image_stack.yaml
```
## Setup

### Create the AI user

```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p /home/ai/.config/containers/systemd/
mkdir -p /home/ai/.ssh
```

Models are big. You'll want some tools to help find large files quickly when space runs out.

### Helper aliases

Add these to your `.bashrc`:
```bash
# Calculate all folder sizes in current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# Calculate all file sizes in current dir
alias {dufile,duf}='ls -lhSr'

# Restart llama-server / follow logs
alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
alias llama-logs="journalctl --user -fu llama-server"

# Restart stable diffusion gen and edit servers / follow logs
alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
```

### Create the models dir

```bash
mkdir -p /home/ai/models/{text,image,video,embedding,tts,stt}
```
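The brace expansion above creates one subdirectory per modality; a sketch on a throwaway root to see exactly what it produces:

```shell
# Temp root stands in for /home/ai.
ROOT=$(mktemp -d)
mkdir -p "$ROOT"/models/{text,image,video,embedding,tts,stt}
ls "$ROOT"/models
```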
### Install the Hugging Face CLI

<https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started>

```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash

# Login
hf auth login
```

### Samba Model Storage

I recommend adding network storage for keeping models offloaded. This mounts a samba share at `/srv/models`.

```bash
dnf install -y cifs-utils

# Add this to /etc/fstab
//driveripper.reeselink.com/smb_models /srv/models cifs _netdev,nofail,uid=1001,gid=1001,credentials=/etc/samba/credentials 0 0

# Then mount
systemctl daemon-reload
mount -a --mkdir
```

Here are some sync commands that I use to keep the samba share in sync with the home directory:

```bash
# Sync models from home dir to the samba share
rsync -av --progress /home/ai/models/ /srv/models/
```
### Download models

In general I aim for at least 8-bit quantization.
#### Text models

<https://huggingface.co/ggml-org/collections>

##### GPT-OSS

<https://unsloth.ai/docs/models/gpt-oss-how-to-run-and-fine-tune#recommended-settings>

```bash
# gpt-oss-120b
mkdir gpt-oss-120b && cd gpt-oss-120b
hf download --local-dir . ggml-org/gpt-oss-120b-GGUF

# gpt-oss-20b
mkdir gpt-oss-20b && cd gpt-oss-20b
hf download --local-dir . ggml-org/gpt-oss-20b-GGUF
```

##### Mistral

```bash
# devstral-small-2-24b
mkdir devstral-small-2-24b && cd devstral-small-2-24b
hf download --local-dir . ggml-org/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q8_0.gguf

# ministral-3-14b
mkdir ministral-3-14b && cd ministral-3-14b
hf download --local-dir . ggml-org/Ministral-3-14B-Reasoning-2512-GGUF

# ministral-3-3b-instruct
mkdir ministral-3-3b-instruct && cd ministral-3-3b-instruct
hf download --local-dir . ggml-org/Ministral-3-3B-Instruct-2512-GGUF
```

##### Qwen

```bash
# qwen3.5-4b
mkdir qwen3.5-4b && cd qwen3.5-4b
hf download --local-dir . unsloth/Qwen3.5-4B-GGUF Qwen3.5-4B-Q8_0.gguf
hf download --local-dir . unsloth/Qwen3.5-4B-GGUF mmproj-F16.gguf

# qwen3.5-35b-a3b
mkdir qwen3.5-35b-a3b && cd qwen3.5-35b-a3b
hf download --local-dir . unsloth/Qwen3.5-35B-A3B-GGUF Qwen3.5-35B-A3B-Q8_0.gguf
hf download --local-dir . unsloth/Qwen3.5-35B-A3B-GGUF mmproj-F16.gguf

# qwen3-30b-a3b-instruct
mkdir qwen3-30b-a3b-instruct && cd qwen3-30b-a3b-instruct
hf download --local-dir . ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF

# qwen3-vl-30b-a3b-thinking
mkdir qwen3-vl-30b-a3b-thinking && cd qwen3-vl-30b-a3b-thinking
hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF Qwen3VL-30B-A3B-Thinking-Q8_0.gguf
hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF mmproj-Qwen3VL-30B-A3B-Thinking-F16.gguf

# qwen3-vl-30b-a3b-instruct
mkdir qwen3-vl-30b-a3b-instruct && cd qwen3-vl-30b-a3b-instruct
hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Instruct-GGUF Qwen3VL-30B-A3B-Instruct-Q8_0.gguf
hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Instruct-GGUF mmproj-Qwen3VL-30B-A3B-Instruct-F16.gguf

# qwen3-coder-30b-a3b-instruct
mkdir qwen3-coder-30b-a3b-instruct && cd qwen3-coder-30b-a3b-instruct
hf download --local-dir . ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF

# qwen3-coder-next
mkdir qwen3-coder-next && cd qwen3-coder-next
hf download --local-dir . unsloth/Qwen3-Coder-Next-GGUF --include "Q8_0/*.gguf"

# qwen3-8b (benchmarks)
mkdir qwen3-8b && cd qwen3-8b
hf download --local-dir . Qwen/Qwen3-8B-GGUF Qwen3-8B-Q8_0.gguf
```

##### GLM

```bash
# glm-4.7-flash-30b
mkdir glm-4.7-flash-30b && cd glm-4.7-flash-30b
hf download --local-dir . unsloth/GLM-4.7-Flash-GGUF GLM-4.7-Flash-Q8_0.gguf
```

##### Gemma

```bash
# Note "it" vs "pt" suffixes: "it" is instruction-tuned, "pt" is the base model (not as good for out-of-the-box use)
# gemma-3-27b-it
mkdir gemma-3-27b-it && cd gemma-3-27b-it
hf download --local-dir . unsloth/gemma-3-27b-it-GGUF gemma-3-27b-it-Q8_0.gguf
hf download --local-dir . unsloth/gemma-3-27b-it-GGUF mmproj-F16.gguf
```

##### Dolphin

```bash
# dolphin-mistral-24b-venice
mkdir dolphin-mistral-24b-venice && cd dolphin-mistral-24b-venice
hf download --local-dir . bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q8_0.gguf
```

##### LiquidAI

```bash
# lfm2-24b
mkdir lfm2-24b && cd lfm2-24b
hf download --local-dir . LiquidAI/LFM2-24B-A2B-GGUF LFM2-24B-A2B-Q8_0.gguf
```

##### Level 1 Techs

```bash
# kappa-20b
# https://huggingface.co/eousphoros/kappa-20b-131k-GGUF-Q8_0/tree/main
mkdir kappa-20b && cd kappa-20b
hf download --local-dir . eousphoros/kappa-20b-131k-GGUF-Q8_0
```
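The directory names above follow a convention of lowercasing the repo basename and dropping the `-GGUF` suffix. A tiny hypothetical helper (`model_dir` is not part of the stack, just an illustration of the convention):

```shell
# Derive a local directory name from an HF repo id: basename, strip -GGUF, lowercase.
model_dir() {
    basename "$1" | sed -e 's/-GGUF$//' | tr '[:upper:]' '[:lower:]'
}

model_dir unsloth/Qwen3.5-4B-GGUF      # -> qwen3.5-4b
model_dir ggml-org/gpt-oss-120b-GGUF   # -> gpt-oss-120b
```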
#### Image models

##### Z-Image

```bash
# z-turbo
# Fastest image generation, in 8 steps. Great at text and prompt following.
# Lacks variety.
mkdir /home/ai/models/image/z-turbo && cd /home/ai/models/image/z-turbo
hf download --local-dir . leejet/Z-Image-Turbo-GGUF z_image_turbo-Q8_0.gguf
hf download --local-dir . black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir . unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q8_0.gguf
```

##### Flux

```bash
# flux2-klein
# Capable of editing images in 4 steps (though I recommend 5)
mkdir /home/ai/models/image/flux2-klein && cd /home/ai/models/image/flux2-klein
hf download --local-dir . leejet/FLUX.2-klein-9B-GGUF flux-2-klein-9b-Q8_0.gguf
hf download --local-dir . black-forest-labs/FLUX.2-dev ae.safetensors
hf download --local-dir . unsloth/Qwen3-8B-GGUF Qwen3-8B-Q8_0.gguf
```

#### Embedding Models

##### Qwen Embedding

```bash
mkdir qwen3-embed-4b && cd qwen3-embed-4b
hf download --local-dir . Qwen/Qwen3-Embedding-4B-GGUF Qwen3-Embedding-4B-Q8_0.gguf
```

##### Nomic Embedding

```bash
# nomic-embed-text-v2
mkdir /home/ai/models/embedding/nomic-embed-text-v2
hf download --local-dir /home/ai/models/embedding/nomic-embed-text-v2 ggml-org/Nomic-Embed-Text-V2-GGUF
```
## llama.cpp

<https://github.com/ggml-org/llama.cpp/tree/master/tools/server>

```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan (better performance as of Feb 2026)
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .

# ROCM
podman build -f .devops/rocm.Dockerfile -t llama-cpp-rocm:${BUILD_TAG} -t llama-cpp-rocm:latest .

# Run the llama demo server (available on host port 8010)
podman run \
  --rm \
  --name llama-server-demo \
  --device=/dev/kfd \
  --device=/dev/dri \
  -v /home/ai/models/text:/models:z \
  -p 8010:8000 \
  localhost/llama-cpp-vulkan:latest \
  --host 0.0.0.0 \
  --port 8000 \
  -c 16000 \
  --perf \
  --n-gpu-layers all \
  --jinja \
  --models-max 1 \
  --models-dir /models \
  --chat-template-kwargs '{"enable_thinking": false}' \
  -m /models/qwen3.5-35b-a3b
```

Embedding models:

```bash
podman run \
  --rm \
  --name llama-server-demo \
  --device=/dev/kfd \
  --device=/dev/dri \
  -v /home/ai/models/text:/models:z \
  -p 8001:8001 \
  localhost/llama-cpp-vulkan:latest \
  --host 0.0.0.0 \
  --port 8001 \
  -c 512 \
  --perf \
  --n-gpu-layers all \
  --models-max 1 \
  --models-dir /models \
  --embedding
```

```bash
# Test with curl
curl -X POST "https://llama-embed.reeselink.com/embedding" --data '{"model": "qwen3-embed-4b", "content": "Star Wars is better than Star Trek"}'
```
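Embedding vectors from the `/embedding` endpoint are only meaningful relative to each other. A dependency-free cosine-similarity check in `awk`; the vectors here are toy values standing in for real embeddings:

```shell
# Two toy "embeddings" as whitespace-separated numbers.
A="1 0 1"
B="1 1 0"

# Cosine similarity: dot(A, B) / (|A| * |B|).
awk -v a="$A" -v b="$B" 'BEGIN {
    n = split(a, x); split(b, y)
    for (i = 1; i <= n; i++) { dot += x[i]*y[i]; na += x[i]*x[i]; nb += y[i]*y[i] }
    printf "%.4f\n", dot / (sqrt(na) * sqrt(nb))
}'
```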
## stable-diffusion.cpp
|
||||||
|
|
||||||
|
Server: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/server>
|
||||||
|
|
||||||
|
CLI: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/cli>
|
||||||
|
|
||||||
|
```bash
|
||||||
|
git clone https://github.com/leejet/stable-diffusion.cpp.git
|
||||||
|
cd stable-diffusion.cpp
|
||||||
|
git submodule update --init --recursive
|
||||||
|
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
|
||||||
|
|
||||||
|
# Vulkan
|
||||||
|
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
|
||||||
|
```

```bash
# Generate an image with z-turbo
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q8_0.gguf \
--vae /models/image/z-turbo/ae.safetensors \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q8_0.gguf \
-v \
--cfg-scale 1.0 \
--vae-conv-direct \
--diffusion-conv-direct \
--fa \
--mmap \
--seed -1 \
--steps 8 \
-H 1024 \
-W 1024 \
-o /output/output.png \
-p "A photorealistic dragon"

# Edit the generated image with flux2-klein
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q8_0.gguf \
--vae /models/image/flux2-klein/ae.safetensors \
--llm /models/image/flux2-klein/Qwen3-8B-Q8_0.gguf \
-v \
--cfg-scale 1.0 \
--sampling-method euler \
--vae-conv-direct \
--diffusion-conv-direct \
--fa \
--mmap \
--steps 5 \
-H 1024 \
-W 1024 \
-r /output/output.png \
-o /output/edit.png \
-p "Replace the dragon with an old car"

# Video generation with wan2.2
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
-M vid_gen \
--diffusion-model /models/video/wan2.2/Wan2.2-T2V-A14B-LowNoise-Q5_K_M.gguf \
--high-noise-diffusion-model /models/video/wan2.2/Wan2.2-T2V-A14B-HighNoise-Q5_K_M.gguf \
--vae /models/video/wan2.2/wan_2.1_vae.safetensors \
--t5xxl /models/video/wan2.2/umt5-xxl-encoder-Q5_K_M.gguf \
--cfg-scale 3.5 \
--sampling-method euler \
--steps 10 \
--high-noise-cfg-scale 3.5 \
--high-noise-sampling-method euler \
--high-noise-steps 8 \
--vae-conv-direct \
--diffusion-conv-direct \
--vae-tiling \
-v \
-n "Colorful tones, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall graying, worst quality, low quality, JPEG compression residue, ugly, mutilated, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, deformed limbs, finger fusion, still pictures, messy backgrounds, three legs, many people in the background, walking backwards" \
-W 512 \
-H 512 \
--diffusion-fa \
--video-frames 24 \
--flow-shift 3.0 \
-o /output/video_output \
-p "A normal business meeting. People discuss business for 2 seconds. Suddenly, a horde of furries carrying assault rifles bursts into the room and causes a panic. Hatsune Miku leads the charge screaming in rage."
```

## open-webui

```bash
mkdir /home/ai/.env
# Create a file called open-webui-env containing `WEBUI_SECRET_KEY="some-random-key"`
scp active/software_ai_stack/secrets/open-webui-env deskwork-ai:.env/

# Will be available on port 8080
podman run \
-d \
-p 8080:8080 \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```
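One way to produce the random key (`openssl` here is just one option; any sufficiently random string works):

```shell
# Generate a 64-character hex value to use as WEBUI_SECRET_KEY
openssl rand -hex 32
```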

Use the following connections:

| Service | Endpoint |
| ------------------------- | ----------------------------------------- |
| llama.cpp server | <http://host.containers.internal:8000> |
| llama.cpp embed | <http://host.containers.internal:8001> |
| stable-diffusion.cpp | <http://host.containers.internal:1234/v1> |
| stable-diffusion.cpp edit | <http://host.containers.internal:1235/v1> |

## lite-llm

<https://docs.litellm.ai/docs/proxy/configs>

```bash
podman run \
--rm \
--name litellm \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-latest
```
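LiteLLM routes requests based on a config file that maps public model names to backends. A minimal sketch pointing at this stack's llama.cpp servers — the model names and `openai/` prefix here are assumptions, so check them against the config docs linked above:

```yaml
model_list:
  - model_name: instruct
    litellm_params:
      model: openai/instruct
      api_base: http://host.containers.internal:8002/v1
      api_key: none
  - model_name: embed
    litellm_params:
      model: openai/embed
      api_base: http://host.containers.internal:8001/v1
      api_key: none
```

Mount it into the container (e.g. `-v ./config.yaml:/app/config.yaml:z`) and start the proxy with `--config /app/config.yaml`.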

## Install Services with Quadlets

### Internal and External Pods

These will be used to restrict internet access to our llama.cpp and
stable-diffusion.cpp services while allowing the frontend services to
communicate with those containers.

```bash
scp -r active/software_ai_stack/quadlets_pods/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user start ai-internal-pod.service ai-external-pod.service
```
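The isolation comes from the network the internal pod joins. A sketch of what such a network quadlet can look like — the file name matches the `quadlets_pods` naming, but the exact keys here are assumptions, so compare against the real files in `quadlets_pods/`:

```ini
# ai-internal.network (sketch)
# Internal=true blocks outbound internet access while still allowing
# container-to-container traffic between pods on this network.
[Network]
NetworkName=ai-internal
Internal=true
```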

### Llama CPP Server (Port 8000)

Installs the llama.cpp server to run our text models.

```bash
scp -r active/software_ai_stack/quadlets_llama_think/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-internal-pod.service
```

### Llama CPP Embedding Server (Port 8001)

Installs the llama.cpp server to run our embedding models.

```bash
scp -r active/software_ai_stack/quadlets_llama_embed/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-internal-pod.service
```

### Llama CPP Instruct Server (Port 8002)

Installs the llama.cpp server to run an always-loaded instruct (no thinking) model for quick replies.

```bash
scp -r active/software_ai_stack/quadlets_llama_instruct/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-internal-pod.service
```

### Stable Diffusion CPP (Ports 1234 and 1235)

Installs the stable-diffusion.cpp server to run our image models.

```bash
scp -r active/software_ai_stack/quadlets_stable_diffusion/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-internal-pod.service
```

### Open Webui (Port 8080)

Installs the open webui frontend.

```bash
scp -r active/software_ai_stack/quadlets_openwebui/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-external-pod.service
```

Note: all services are available at `host.containers.internal`, so llama.cpp
will be up at `http://host.containers.internal:8000`.

### Install the update script

```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/software_ai_stack/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```

### Install Guest Open Webui with Start/Stop Services

Optionally install a guest open-webui service.

```bash
scp -r active/software_ai_stack/systemd/. deskwork-ai:.config/systemd/user/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user enable open-webui-guest-start.timer
systemctl --user enable open-webui-guest-stop.timer
```

## Benchmark Results

Benchmarks are run with [unsloth gpt-oss-20b Q8_0](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q8_0.gguf).

```bash
# Run the llama.cpp pod (AMD)
podman run -it --rm \
--device=/dev/kfd \
--device=/dev/dri \
-v /home/ai/models/text:/models:z \
--entrypoint /bin/bash \
ghcr.io/ggml-org/llama.cpp:full-vulkan

# Benchmark command
./llama-bench -m /models/gpt-oss-20b/gpt-oss-20b-Q8_0.gguf -p 4096 -n 1024
```

Framework Desktop

| model | size | params | backend | ngl | test | t/s |
| ---------------- | --------: | ------: | ------- | ---: | -----: | ------------: |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan | 99 | pp4096 | 992.74 ± 6.07 |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan | 99 | tg1024 | 75.82 ± 0.07 |

AMD R9700

| model | size | params | backend | ngl | test | t/s |
| ---------------- | --------: | ------: | ------- | ---: | -----: | -------------: |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan | 99 | pp4096 | 3190.85 ± 8.24 |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan | 99 | tg1024 | 168.73 ± 0.15 |

NVIDIA GeForce RTX 4080 SUPER

| model | size | params | backend | ngl | test | t/s |
| ---------------- | --------: | ------: | ------- | ---: | ----: | ------------: |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA | 99 | tg128 | 193.28 ± 1.03 |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA | 99 | tg256 | 193.55 ± 0.34 |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA | 99 | tg512 | 187.39 ± 0.10 |

NVIDIA GeForce RTX 3090

| model | size | params | backend | ngl | test | t/s |
| ---------------- | --------: | ------: | ----------- | ---: | -----: | --------------: |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA,Vulkan | 99 | pp4096 | 3034.03 ± 80.36 |
| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA,Vulkan | 99 | tg1024 | 181.05 ± 9.01 |

Apple M4 Max

| model | test | t/s |
| :---------------------------- | -----: | -------------: |
| unsloth/gpt-oss-20b-Q8_0-GGUF | pp2048 | 1579.12 ± 7.12 |
| unsloth/gpt-oss-20b-Q8_0-GGUF | tg32 | 113.00 ± 2.81 |

## Testing with Curl

### OpenAI API

```bash
export TOKEN=$(cat active/software_ai_stack/secrets/aipi-token)

# List Models
curl https://aipi.reeseapps.com/v1/models \
-H "Authorization: Bearer $TOKEN" | jq

# Text
curl https://aipi.reeseapps.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"model": "llama-instruct/instruct",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, how are you?"}
],
"temperature": 0.7,
"max_tokens": 500
}' | jq

# Completion
curl https://aipi.reeseapps.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"model": "llama-instruct/instruct",
"prompt": "Write a short poem about the ocean.",
"temperature": 0.7,
"max_tokens": 500,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
}' | jq

# Image Gen
curl https://aipi.reeseapps.com/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"model": "sdd-gen/sd-cpp-local",
"prompt": "A futuristic city with flying cars at sunset, digital art",
"n": 1,
"size": "1024x1024"
}' | jq

# Image Edit (multipart form so curl actually uploads the file)
curl https://aipi.reeseapps.com/v1/images/edits \
-H "Authorization: Bearer $TOKEN" \
-F model="sdd-edit/sd-cpp-local" \
-F image="@path/to/your/image.jpg" \
-F prompt="Add a sunset background" \
-F n=1 \
-F size="1024x1024"

# Embed
curl \
"https://aipi.reeseapps.com/v1/embeddings" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"model": "llama-embed/embed",
"input":"This is the reason you ended up here:",
"encoding_format": "float"
}'
```

## Misc

### Qwen3.5 Settings

> We recommend using the following set of sampling parameters for generation

- Non-thinking mode for text tasks: temperature=1.0, top_p=1.00, top_k=20, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
- Non-thinking mode for VL tasks: temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- Thinking mode for text tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- Thinking mode for VL or precise coding (e.g. WebDev) tasks: temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0

> Please note that the support for sampling parameters varies according to inference frameworks.
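The presets above can be mapped onto an OpenAI-style request body. A sketch, assuming the `llama-instruct/instruct` alias from the curl examples; note that `top_k`, `min_p`, and `repeat_penalty` are not part of the standard OpenAI schema, so llama.cpp reads them as extra body fields:

```python
def qwen35_params(mode="text-non-thinking"):
    """Map the recommended Qwen3.5 presets onto request parameters."""
    presets = {
        "text-non-thinking": {"temperature": 1.0, "top_p": 1.00, "presence_penalty": 2.0},
        "vl-non-thinking": {"temperature": 0.7, "top_p": 0.80, "presence_penalty": 1.5},
        "text-thinking": {"temperature": 1.0, "top_p": 0.95, "presence_penalty": 1.5},
        "coding-thinking": {"temperature": 0.6, "top_p": 0.95, "presence_penalty": 0.0},
    }
    # These three are the same across all four presets
    common = {"top_k": 20, "min_p": 0.0, "repeat_penalty": 1.0}
    return {**common, **presets[mode]}


payload = {
    "model": "llama-instruct/instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    **qwen35_params("text-non-thinking"),
}
```

Sending `payload` to `/v1/chat/completions` then applies the recommended sampling per request, rather than baking it into the server's command line.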

23
active/software_ai_stack/install_ai_image_stack.yaml
Normal file
@@ -0,0 +1,23 @@

```yaml
- name: Create Deskwork AI Stack
  hosts: toybox-ai
  tasks:
    - name: Create /home/ai/.config/containers/systemd
      ansible.builtin.file:
        path: /home/ai/.config/containers/systemd
        state: directory
        mode: "0755"
    - name: Copy Quadlets
      template:
        src: "{{ item }}"
        dest: "/home/ai/.config/containers/systemd/{{ item }}"
      loop:
        - ai-internal.network
        - ai-internal.pod
        - stable-diffusion-gen-server.container
        - stable-diffusion-edit-server.container
    - name: Reload and start the ai-internal-pod service
      ansible.builtin.systemd_service:
        state: restarted
        name: ai-internal-pod.service
        daemon_reload: true
        scope: user
```

24
active/software_ai_stack/install_ai_text_stack.yaml
Normal file
@@ -0,0 +1,24 @@

```yaml
- name: Create Deskwork AI Stack
  hosts: deskwork-ai
  tasks:
    - name: Create /home/ai/.config/containers/systemd
      ansible.builtin.file:
        path: /home/ai/.config/containers/systemd
        state: directory
        mode: "0755"
    - name: Copy Quadlets
      template:
        src: "{{ item }}"
        dest: "/home/ai/.config/containers/systemd/{{ item }}"
      loop:
        - ai-internal.network
        - ai-internal.pod
        - llama-embed.container
        - llama-instruct.container
        - llama-think.container
    - name: Reload and start the ai-internal-pod service
      ansible.builtin.systemd_service:
        state: restarted
        name: ai-internal-pod.service
        daemon_reload: true
        scope: user
```

44
active/software_ai_stack/llama-embed.container
Normal file
@@ -0,0 +1,44 @@

```ini
[Unit]
Description=A Llama CPP Server For Embedding Models

[Container]
# Shared AI internal pod
Pod=ai-internal.pod

# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest

# Downloaded models volume
Volume=/home/ai/models/embedding:/models:z

# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri

# Server command
Exec=--port 8001 \
  -c 0 \
  --perf \
  --n-gpu-layers all \
  --models-max 1 \
  --models-dir /models \
  --embedding \
  -m /models/qwen3-embed-4b/Qwen3-Embedding-4B-Q8_0.gguf \
  --alias embed

# Health Check
HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8001/props || exit 1
HealthInterval=10s
HealthRetries=3
HealthStartPeriod=10s
HealthTimeout=30s
HealthOnFailure=kill

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
```

51
active/software_ai_stack/llama-instruct.container
Normal file
@@ -0,0 +1,51 @@

```ini
[Unit]
Description=A Llama CPP Server Running Qwen3.5 35B A3B (Instruct)

[Container]
# Shared AI internal pod
Pod=ai-internal.pod

# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest

# Downloaded models volume
Volume=/home/ai/models/text:/models:z

# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri

# Server command
Exec=--port 8002 \
  -c 16000 \
  --perf \
  -v \
  --top-k 20 \
  --top-p 0.8 \
  --min-p 0 \
  --presence-penalty 1.5 \
  --repeat-penalty 1 \
  --temp 0.7 \
  --n-gpu-layers all \
  --jinja \
  --chat-template-kwargs '{"enable_thinking": false}' \
  -m /models/qwen3.5-35b-a3b/Qwen3.5-35B-A3B-Q8_0.gguf \
  --mmproj /models/qwen3.5-35b-a3b/mmproj-F16.gguf \
  --alias instruct

# Health Check (this server listens on 8002)
HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8002/health || exit 1
HealthInterval=10s
HealthRetries=3
HealthStartPeriod=10s
HealthTimeout=30s
HealthOnFailure=kill

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
```
```diff
@@ -17,7 +17,7 @@ AddDevice=/dev/dri

 # Server command
 Exec=--port 8000 \
-  -c 48000 \
+  -c 64000 \
   --perf \
   --n-gpu-layers all \
   --jinja \
@@ -25,7 +25,7 @@ Exec=--port 8000 \
   --models-dir /models

 # Health Check
-HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8000/props || exit 1
+HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8000/health || exit 1
 HealthInterval=10s
 HealthRetries=3
 HealthStartPeriod=10s
```
```diff
@@ -3,7 +3,7 @@ Description=An Open Webui Frontend for Local AI Services

 [Container]
 # Shared AI external pod
-Pod=ai-external.pod
+PublishPort=8080:8080

 # Open Webui base image
 Image=ghcr.io/open-webui/open-webui:main
```

133
active/software_ai_stack/openai-example.py
Normal file
@@ -0,0 +1,133 @@

```python
import base64
import os
from datetime import datetime
from io import BytesIO

import requests
from PIL import Image

# Configuration
BASE_URL = "https://llama-cpp.reeselink.com"
API_KEY = os.getenv("LLAMA_CPP_API_KEY", "")  # Set if required


def call_api(endpoint, method="GET", data=None):
    """Generic API call helper"""
    url = f"{BASE_URL}/v1/{endpoint}"
    headers = {"Content-Type": "application/json"}
    if API_KEY:
        headers["Authorization"] = f"Bearer {API_KEY}"

    response = requests.request(method, url, headers=headers, json=data)
    return response


# 1. List Models
models_response = call_api("models")
models = models_response.json().get("data", [])
print(f"Available models: {[m['id'] for m in models]}")

# 2. Use First Model
model_id = models[0]["id"]

# 3. Chat Completion
chat_data = {
    "model": model_id,
    "messages": [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Tell me about Everquest!"},
    ],
    "temperature": 0.95,
    "max_tokens": 100,
}
response = call_api("chat/completions", "POST", chat_data)
print(response.json()["choices"][0]["message"]["content"])


def describe_image(image_path, api_key=None):
    """
    Send an image to the LLM for description
    """
    base_url = "https://llama-cpp.reeselink.com"

    # Read and encode image to base64
    with open(image_path, "rb") as f:
        encoded_image = base64.b64encode(f.read()).decode("utf-8")

    # Prepare headers
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"

    # Create payload
    payload = {
        "model": "qwen3-vl-30b-a3b-instruct",  # 👁️ VISION MODEL
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in detail"},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
                    },
                ],
            }
        ],
        "max_tokens": 1000,
        "temperature": 0.7,
    }

    # Send request
    response = requests.post(
        f"{base_url}/v1/chat/completions", headers=headers, json=payload
    )

    if response.status_code == 200:
        return response.json()["choices"][0]["message"]["content"]
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
        return None


# description = describe_image("generated-image.png", api_key="your_key")
# print(description)


def generate_image(prompt, **kwargs):
    """
    Generate image using Stable Diffusion / OpenAI compatible API
    """
    base_url = "http://toybox.reeselink.com:1234/v1"

    # Extra options (negative_prompt, steps, guidance, ...) pass through
    payload = {"model": "default", "prompt": prompt, "n": 1, "size": "1024x1024", **kwargs}

    response = requests.post(
        f"{base_url}/images/generations",
        json=payload,
        timeout=120,
    )

    if response.status_code == 200:
        result = response.json()
        # Save image
        image_data = base64.b64decode(result["data"][0]["b64_json"])
        img = Image.open(BytesIO(image_data))
        filename = f"generated_{datetime.now().strftime('%Y%m%d_%H%M%S')}.png"
        img.save(filename)
        print(f"✅ Saved: {filename}")
        return result
    else:
        print(f"❌ Error: {response.status_code}")
        print(response.text)
        return None


# Usage:
result = generate_image(
    prompt="A beautiful sunset over mountains, photorealistic",
    negative_prompt="blurry, low quality",
    steps=8,
    guidance=7.5,
)
```
```diff
@@ -21,15 +21,16 @@ Entrypoint=/sd-server

 # Server args
 Exec=-l 0.0.0.0 \
   --listen-port 1235 \
-  --diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \
+  --diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q8_0.gguf \
   --vae /models/image/flux2-klein/ae.safetensors \
-  --llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \
+  --llm /models/image/flux2-klein/Qwen3-8B-Q8_0.gguf \
   -v \
-  --cfg-scale 1.0 \
   --sampling-method euler \
+  --cfg-scale 1.0 \
   --vae-conv-direct \
-  --offload-to-cpu \
   --diffusion-conv-direct \
+  --fa \
+  --mmap \
   --seed -1 \
   --steps 5
```
```diff
@@ -21,12 +21,15 @@ Entrypoint=/sd-server

 # Server args
 Exec=-l 0.0.0.0 \
   --listen-port 1234 \
-  --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
+  --diffusion-model /models/image/z-turbo/z_image_turbo-Q8_0.gguf \
   --vae /models/image/z-turbo/ae.safetensors \
-  --llm /models/image/z-turbo/qwen_3_4b.safetensors \
+  --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q8_0.gguf \
   -v \
   --cfg-scale 1.0 \
   --vae-conv-direct \
+  --diffusion-conv-direct \
+  --fa \
+  --mmap \
   --seed -1 \
   --steps 8
```
```diff
@@ -56,6 +56,10 @@ version = "*"
 name = "policycoreutils-python-utils"
 version = "*"

+[[packages]]
+name = "systemd-container"
+version = "*"
+
 [[customizations.files]]
 path = "/root/.inputrc"
 mode = "0644"
```
````diff
@@ -37,7 +37,7 @@ mkdir /srv/smb/sambauser
 sudo semanage fcontext --add --type "samba_share_t" "/srv/smb(/.*)?"

 # Run restorecon at the root of the btrfs subvolume
-sudo restorecon -R /srv
+sudo restorecon -FRv /srv
 ```

 Edit /etc/samba/smb.conf
````
```diff
@@ -9,12 +9,19 @@ fedora:
    minecraft:
    borg-root:
    elk:
+    toybox-root:

 hardware:
   hosts:
    deskwork-root:
    driveripper:

+ai:
+  hosts:
+    ai-ai:
+    deskwork-ai:
+    toybox-ai:
+
 caddy:
   hosts:
    proxy:
```

1
keys/nic_ed25519.pub
Normal file
@@ -0,0 +1 @@

```
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDeo7Zgi2fEuhoLEucLUDCOS/n61Uphbesmz363fedLj ssh@norrath.org
```
```diff
@@ -6,7 +6,11 @@ dependencies = [
     "click==8.2.1",
     "mkdocs>=1.6.1",
     "openai>=2.21.0",
+    "pika>=1.3.2",
+    "pillow>=12.1.1",
+    "pytest>=9.0.2",
     "pyyaml>=6.0.3",
+    "requests>=2.32.5",
     "tqdm>=4.67.3",
     "types-pyyaml>=6.0.12.20250915",
     "types-tqdm>=4.67.3.20260205",
```

3
templates/container/compose/README.md
Normal file
@@ -0,0 +1,3 @@

```markdown
# Compose

Put your compose.yaml here.
```
```diff
@@ -59,7 +59,7 @@ Run the following to convert a compose.yaml into the various `.container` files
 podman run \
   --security-opt label=disable \
   --rm \
-  -v $(pwd)/active/container_foobar/:/compose \
+  -v $(pwd)/active/container_foobar/compose:/compose \
   -v $(pwd)/active/container_foobar/quadlets:/quadlets \
   quay.io/k9withabone/podlet \
   -f /quadlets \
```

3
templates/container/quadlets/README.md
Normal file
@@ -0,0 +1,3 @@

```markdown
# Quadlets

Put your quadlets here.
```
174
uv.lock
generated
174
uv.lock
generated
@@ -32,6 +32,47 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/e6/ad/3cc14f097111b4de0040c83a525973216457bbeeb63739ef1ed275c1c021/certifi-2026.1.4-py3-none-any.whl", hash = "sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c", size = 152900, upload-time = "2026-01-04T02:42:40.15Z" },
|
{ url = "https://files.pythonhosted.org/packages/e6/ad/3cc14f097111b4de0040c83a525973216457bbeeb63739ef1ed275c1c021/certifi-2026.1.4-py3-none-any.whl", hash = "sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c", size = 152900, upload-time = "2026-01-04T02:42:40.15Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "charset-normalizer"
|
||||||
|
version = "3.4.4"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/97/45/4b3a1239bbacd321068ea6e7ac28875b03ab8bc0aa0966452db17cd36714/charset_normalizer-3.4.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794", size = 208091, upload-time = "2025-10-14T04:41:13.346Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7d/62/73a6d7450829655a35bb88a88fca7d736f9882a27eacdca2c6d505b57e2e/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed", size = 147936, upload-time = "2025-10-14T04:41:14.461Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/89/c5/adb8c8b3d6625bef6d88b251bbb0d95f8205831b987631ab0c8bb5d937c2/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72", size = 144180, upload-time = "2025-10-14T04:41:15.588Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/91/ed/9706e4070682d1cc219050b6048bfd293ccf67b3d4f5a4f39207453d4b99/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328", size = 161346, upload-time = "2025-10-14T04:41:16.738Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d5/0d/031f0d95e4972901a2f6f09ef055751805ff541511dc1252ba3ca1f80cf5/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede", size = 158874, upload-time = "2025-10-14T04:41:17.923Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f5/83/6ab5883f57c9c801ce5e5677242328aa45592be8a00644310a008d04f922/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894", size = 153076, upload-time = "2025-10-14T04:41:19.106Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/75/1e/5ff781ddf5260e387d6419959ee89ef13878229732732ee73cdae01800f2/charset_normalizer-3.4.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1", size = 150601, upload-time = "2025-10-14T04:41:20.245Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d7/57/71be810965493d3510a6ca79b90c19e48696fb1ff964da319334b12677f0/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490", size = 150376, upload-time = "2025-10-14T04:41:21.398Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/e5/d5/c3d057a78c181d007014feb7e9f2e65905a6c4ef182c0ddf0de2924edd65/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44", size = 144825, upload-time = "2025-10-14T04:41:22.583Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/e6/8c/d0406294828d4976f275ffbe66f00266c4b3136b7506941d87c00cab5272/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133", size = 162583, upload-time = "2025-10-14T04:41:23.754Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d7/24/e2aa1f18c8f15c4c0e932d9287b8609dd30ad56dbe41d926bd846e22fb8d/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3", size = 150366, upload-time = "2025-10-14T04:41:25.27Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/e4/5b/1e6160c7739aad1e2df054300cc618b06bf784a7a164b0f238360721ab86/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e", size = 160300, upload-time = "2025-10-14T04:41:26.725Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7a/10/f882167cd207fbdd743e55534d5d9620e095089d176d55cb22d5322f2afd/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc", size = 154465, upload-time = "2025-10-14T04:41:28.322Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/89/66/c7a9e1b7429be72123441bfdbaf2bc13faab3f90b933f664db506dea5915/charset_normalizer-3.4.4-cp313-cp313-win32.whl", hash = "sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac", size = 99404, upload-time = "2025-10-14T04:41:29.95Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/c4/26/b9924fa27db384bdcd97ab83b4f0a8058d96ad9626ead570674d5e737d90/charset_normalizer-3.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14", size = 107092, upload-time = "2025-10-14T04:41:31.188Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/af/8f/3ed4bfa0c0c72a7ca17f0380cd9e4dd842b09f664e780c13cff1dcf2ef1b/charset_normalizer-3.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2", size = 100408, upload-time = "2025-10-14T04:41:32.624Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/2a/35/7051599bd493e62411d6ede36fd5af83a38f37c4767b92884df7301db25d/charset_normalizer-3.4.4-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd", size = 207746, upload-time = "2025-10-14T04:41:33.773Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/10/9a/97c8d48ef10d6cd4fcead2415523221624bf58bcf68a802721a6bc807c8f/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb", size = 147889, upload-time = "2025-10-14T04:41:34.897Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/10/bf/979224a919a1b606c82bd2c5fa49b5c6d5727aa47b4312bb27b1734f53cd/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e", size = 143641, upload-time = "2025-10-14T04:41:36.116Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ba/33/0ad65587441fc730dc7bd90e9716b30b4702dc7b617e6ba4997dc8651495/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14", size = 160779, upload-time = "2025-10-14T04:41:37.229Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/67/ed/331d6b249259ee71ddea93f6f2f0a56cfebd46938bde6fcc6f7b9a3d0e09/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191", size = 159035, upload-time = "2025-10-14T04:41:38.368Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/67/ff/f6b948ca32e4f2a4576aa129d8bed61f2e0543bf9f5f2b7fc3758ed005c9/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838", size = 152542, upload-time = "2025-10-14T04:41:39.862Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/16/85/276033dcbcc369eb176594de22728541a925b2632f9716428c851b149e83/charset_normalizer-3.4.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6", size = 149524, upload-time = "2025-10-14T04:41:41.319Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/9e/f2/6a2a1f722b6aba37050e626530a46a68f74e63683947a8acff92569f979a/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e", size = 150395, upload-time = "2025-10-14T04:41:42.539Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/60/bb/2186cb2f2bbaea6338cad15ce23a67f9b0672929744381e28b0592676824/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c", size = 143680, upload-time = "2025-10-14T04:41:43.661Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7d/a5/bf6f13b772fbb2a90360eb620d52ed8f796f3c5caee8398c3b2eb7b1c60d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090", size = 162045, upload-time = "2025-10-14T04:41:44.821Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/df/c5/d1be898bf0dc3ef9030c3825e5d3b83f2c528d207d246cbabe245966808d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152", size = 149687, upload-time = "2025-10-14T04:41:46.442Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/a5/42/90c1f7b9341eef50c8a1cb3f098ac43b0508413f33affd762855f67a410e/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828", size = 160014, upload-time = "2025-10-14T04:41:47.631Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/76/be/4d3ee471e8145d12795ab655ece37baed0929462a86e72372fd25859047c/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec", size = 154044, upload-time = "2025-10-14T04:41:48.81Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/b0/6f/8f7af07237c34a1defe7defc565a9bc1807762f672c0fde711a4b22bf9c0/charset_normalizer-3.4.4-cp314-cp314-win32.whl", hash = "sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9", size = 99940, upload-time = "2025-10-14T04:41:49.946Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/4b/51/8ade005e5ca5b0d80fb4aff72a3775b325bdc3d27408c8113811a7cbe640/charset_normalizer-3.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c", size = 107104, upload-time = "2025-10-14T04:41:51.051Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/da/5f/6b8f83a55bb8278772c5ae54a577f3099025f9ade59d0136ac24a0df4bde/charset_normalizer-3.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2", size = 100743, upload-time = "2025-10-14T04:41:52.122Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "click"
|
name = "click"
|
||||||
version = "8.2.1"
|
version = "8.2.1"
|
||||||
@@ -91,7 +132,11 @@ dependencies = [
|
|||||||
{ name = "click" },
|
{ name = "click" },
|
||||||
{ name = "mkdocs" },
|
{ name = "mkdocs" },
|
||||||
{ name = "openai" },
|
{ name = "openai" },
|
||||||
|
{ name = "pika" },
|
||||||
|
{ name = "pillow" },
|
||||||
|
{ name = "pytest" },
|
||||||
{ name = "pyyaml" },
|
{ name = "pyyaml" },
|
||||||
|
{ name = "requests" },
|
||||||
{ name = "tqdm" },
|
{ name = "tqdm" },
|
||||||
{ name = "types-pyyaml" },
|
{ name = "types-pyyaml" },
|
||||||
{ name = "types-tqdm" },
|
{ name = "types-tqdm" },
|
||||||
@@ -102,7 +147,11 @@ requires-dist = [
|
|||||||
{ name = "click", specifier = "==8.2.1" },
|
{ name = "click", specifier = "==8.2.1" },
|
||||||
{ name = "mkdocs", specifier = ">=1.6.1" },
|
{ name = "mkdocs", specifier = ">=1.6.1" },
|
||||||
{ name = "openai", specifier = ">=2.21.0" },
|
{ name = "openai", specifier = ">=2.21.0" },
|
||||||
|
{ name = "pika", specifier = ">=1.3.2" },
|
||||||
|
{ name = "pillow", specifier = ">=12.1.1" },
|
||||||
|
{ name = "pytest", specifier = ">=9.0.2" },
|
||||||
{ name = "pyyaml", specifier = ">=6.0.3" },
|
{ name = "pyyaml", specifier = ">=6.0.3" },
|
||||||
|
{ name = "requests", specifier = ">=2.32.5" },
|
||||||
{ name = "tqdm", specifier = ">=4.67.3" },
|
{ name = "tqdm", specifier = ">=4.67.3" },
|
||||||
{ name = "types-pyyaml", specifier = ">=6.0.12.20250915" },
|
{ name = "types-pyyaml", specifier = ">=6.0.12.20250915" },
|
||||||
{ name = "types-tqdm", specifier = ">=4.67.3.20260205" },
|
{ name = "types-tqdm", specifier = ">=4.67.3.20260205" },
|
||||||
@@ -145,6 +194,15 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
|
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "iniconfig"
|
||||||
|
version = "2.3.0"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
|
||||||
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "jinja2"
|
name = "jinja2"
|
||||||
version = "3.1.6"
|
version = "3.1.6"
|
||||||
@@ -353,6 +411,73 @@ wheels = [
|
|||||||
{ url = "https://files.pythonhosted.org/packages/ef/3c/2c197d226f9ea224a9ab8d197933f9da0ae0aac5b6e0f884e2b8d9c8e9f7/pathspec-1.0.4-py3-none-any.whl", hash = "sha256:fb6ae2fd4e7c921a165808a552060e722767cfa526f99ca5156ed2ce45a5c723", size = 55206, upload-time = "2026-01-27T03:59:45.137Z" },
|
{ url = "https://files.pythonhosted.org/packages/ef/3c/2c197d226f9ea224a9ab8d197933f9da0ae0aac5b6e0f884e2b8d9c8e9f7/pathspec-1.0.4-py3-none-any.whl", hash = "sha256:fb6ae2fd4e7c921a165808a552060e722767cfa526f99ca5156ed2ce45a5c723", size = 55206, upload-time = "2026-01-27T03:59:45.137Z" },
|
||||||
]
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pika"
|
||||||
|
version = "1.3.2"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/db/db/d4102f356af18f316c67f2cead8ece307f731dd63140e2c71f170ddacf9b/pika-1.3.2.tar.gz", hash = "sha256:b2a327ddddf8570b4965b3576ac77091b850262d34ce8c1d8cb4e4146aa4145f", size = 145029, upload-time = "2023-05-05T14:25:43.368Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f9/f3/f412836ec714d36f0f4ab581b84c491e3f42c6b5b97a6c6ed1817f3c16d0/pika-1.3.2-py3-none-any.whl", hash = "sha256:0779a7c1fafd805672796085560d290213a465e4f6f76a6fb19e378d8041a14f", size = 155415, upload-time = "2023-05-05T14:25:41.484Z" },
|
||||||
|
]
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "pillow"
|
||||||
|
version = "12.1.1"
|
||||||
|
source = { registry = "https://pypi.org/simple" }
|
||||||
|
sdist = { url = "https://files.pythonhosted.org/packages/1f/42/5c74462b4fd957fcd7b13b04fb3205ff8349236ea74c7c375766d6c82288/pillow-12.1.1.tar.gz", hash = "sha256:9ad8fa5937ab05218e2b6a4cff30295ad35afd2f83ac592e68c0d871bb0fdbc4", size = 46980264, upload-time = "2026-02-11T04:23:07.146Z" }
|
||||||
|
wheels = [
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d5/11/6db24d4bd7685583caeae54b7009584e38da3c3d4488ed4cd25b439de486/pillow-12.1.1-cp313-cp313-ios_13_0_arm64_iphoneos.whl", hash = "sha256:d242e8ac078781f1de88bf823d70c1a9b3c7950a44cdf4b7c012e22ccbcd8e4e", size = 4062689, upload-time = "2026-02-11T04:21:06.804Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/33/c0/ce6d3b1fe190f0021203e0d9b5b99e57843e345f15f9ef22fcd43842fd21/pillow-12.1.1-cp313-cp313-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:02f84dfad02693676692746df05b89cf25597560db2857363a208e393429f5e9", size = 4138535, upload-time = "2026-02-11T04:21:08.452Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/a0/c6/d5eb6a4fb32a3f9c21a8c7613ec706534ea1cf9f4b3663e99f0d83f6fca8/pillow-12.1.1-cp313-cp313-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:e65498daf4b583091ccbb2556c7000abf0f3349fcd57ef7adc9a84a394ed29f6", size = 3601364, upload-time = "2026-02-11T04:21:10.194Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/14/a1/16c4b823838ba4c9c52c0e6bbda903a3fe5a1bdbf1b8eb4fff7156f3e318/pillow-12.1.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:6c6db3b84c87d48d0088943bf33440e0c42370b99b1c2a7989216f7b42eede60", size = 5262561, upload-time = "2026-02-11T04:21:11.742Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/bb/ad/ad9dc98ff24f485008aa5cdedaf1a219876f6f6c42a4626c08bc4e80b120/pillow-12.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8b7e5304e34942bf62e15184219a7b5ad4ff7f3bb5cca4d984f37df1a0e1aee2", size = 4657460, upload-time = "2026-02-11T04:21:13.786Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/9e/1b/f1a4ea9a895b5732152789326202a82464d5254759fbacae4deea3069334/pillow-12.1.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:18e5bddd742a44b7e6b1e773ab5db102bd7a94c32555ba656e76d319d19c3850", size = 6232698, upload-time = "2026-02-11T04:21:15.949Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/95/f4/86f51b8745070daf21fd2e5b1fe0eb35d4db9ca26e6d58366562fb56a743/pillow-12.1.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc44ef1f3de4f45b50ccf9136999d71abb99dca7706bc75d222ed350b9fd2289", size = 8041706, upload-time = "2026-02-11T04:21:17.723Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/29/9b/d6ecd956bb1266dd1045e995cce9b8d77759e740953a1c9aad9502a0461e/pillow-12.1.1-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5a8eb7ed8d4198bccbd07058416eeec51686b498e784eda166395a23eb99138e", size = 6346621, upload-time = "2026-02-11T04:21:19.547Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/71/24/538bff45bde96535d7d998c6fed1a751c75ac7c53c37c90dc2601b243893/pillow-12.1.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:47b94983da0c642de92ced1702c5b6c292a84bd3a8e1d1702ff923f183594717", size = 7038069, upload-time = "2026-02-11T04:21:21.378Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/94/0e/58cb1a6bc48f746bc4cb3adb8cabff73e2742c92b3bf7a220b7cf69b9177/pillow-12.1.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:518a48c2aab7ce596d3bf79d0e275661b846e86e4d0e7dec34712c30fe07f02a", size = 6460040, upload-time = "2026-02-11T04:21:23.148Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/6c/57/9045cb3ff11eeb6c1adce3b2d60d7d299d7b273a2e6c8381a524abfdc474/pillow-12.1.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a550ae29b95c6dc13cf69e2c9dc5747f814c54eeb2e32d683e5e93af56caa029", size = 7164523, upload-time = "2026-02-11T04:21:25.01Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/73/f2/9be9cb99f2175f0d4dbadd6616ce1bf068ee54a28277ea1bf1fbf729c250/pillow-12.1.1-cp313-cp313-win32.whl", hash = "sha256:a003d7422449f6d1e3a34e3dd4110c22148336918ddbfc6a32581cd54b2e0b2b", size = 6332552, upload-time = "2026-02-11T04:21:27.238Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/3f/eb/b0834ad8b583d7d9d42b80becff092082a1c3c156bb582590fcc973f1c7c/pillow-12.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:344cf1e3dab3be4b1fa08e449323d98a2a3f819ad20f4b22e77a0ede31f0faa1", size = 7040108, upload-time = "2026-02-11T04:21:29.462Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d5/7d/fc09634e2aabdd0feabaff4a32f4a7d97789223e7c2042fd805ea4b4d2c2/pillow-12.1.1-cp313-cp313-win_arm64.whl", hash = "sha256:5c0dd1636633e7e6a0afe7bf6a51a14992b7f8e60de5789018ebbdfae55b040a", size = 2453712, upload-time = "2026-02-11T04:21:31.072Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/19/2a/b9d62794fc8a0dd14c1943df68347badbd5511103e0d04c035ffe5cf2255/pillow-12.1.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0330d233c1a0ead844fc097a7d16c0abff4c12e856c0b325f231820fee1f39da", size = 5264880, upload-time = "2026-02-11T04:21:32.865Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/26/9d/e03d857d1347fa5ed9247e123fcd2a97b6220e15e9cb73ca0a8d91702c6e/pillow-12.1.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5dae5f21afb91322f2ff791895ddd8889e5e947ff59f71b46041c8ce6db790bc", size = 4660616, upload-time = "2026-02-11T04:21:34.97Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f7/ec/8a6d22afd02570d30954e043f09c32772bfe143ba9285e2fdb11284952cd/pillow-12.1.1-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:2e0c664be47252947d870ac0d327fea7e63985a08794758aa8af5b6cb6ec0c9c", size = 6269008, upload-time = "2026-02-11T04:21:36.623Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/3d/1d/6d875422c9f28a4a361f495a5f68d9de4a66941dc2c619103ca335fa6446/pillow-12.1.1-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:691ab2ac363b8217f7d31b3497108fb1f50faab2f75dfb03284ec2f217e87bf8", size = 8073226, upload-time = "2026-02-11T04:21:38.585Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/a1/cd/134b0b6ee5eda6dc09e25e24b40fdafe11a520bc725c1d0bbaa5e00bf95b/pillow-12.1.1-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e9e8064fb1cc019296958595f6db671fba95209e3ceb0c4734c9baf97de04b20", size = 6380136, upload-time = "2026-02-11T04:21:40.562Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/7a/a9/7628f013f18f001c1b98d8fffe3452f306a70dc6aba7d931019e0492f45e/pillow-12.1.1-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:472a8d7ded663e6162dafdf20015c486a7009483ca671cece7a9279b512fcb13", size = 7067129, upload-time = "2026-02-11T04:21:42.521Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/1e/f8/66ab30a2193b277785601e82ee2d49f68ea575d9637e5e234faaa98efa4c/pillow-12.1.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:89b54027a766529136a06cfebeecb3a04900397a3590fd252160b888479517bf", size = 6491807, upload-time = "2026-02-11T04:21:44.22Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/da/0b/a877a6627dc8318fdb84e357c5e1a758c0941ab1ddffdafd231983788579/pillow-12.1.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:86172b0831b82ce4f7877f280055892b31179e1576aa00d0df3bb1bbf8c3e524", size = 7190954, upload-time = "2026-02-11T04:21:46.114Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/83/43/6f732ff85743cf746b1361b91665d9f5155e1483817f693f8d57ea93147f/pillow-12.1.1-cp313-cp313t-win32.whl", hash = "sha256:44ce27545b6efcf0fdbdceb31c9a5bdea9333e664cda58a7e674bb74608b3986", size = 6336441, upload-time = "2026-02-11T04:21:48.22Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/3b/44/e865ef3986611bb75bfabdf94a590016ea327833f434558801122979cd0e/pillow-12.1.1-cp313-cp313t-win_amd64.whl", hash = "sha256:a285e3eb7a5a45a2ff504e31f4a8d1b12ef62e84e5411c6804a42197c1cf586c", size = 7045383, upload-time = "2026-02-11T04:21:50.015Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/a8/c6/f4fb24268d0c6908b9f04143697ea18b0379490cb74ba9e8d41b898bd005/pillow-12.1.1-cp313-cp313t-win_arm64.whl", hash = "sha256:cc7d296b5ea4d29e6570dabeaed58d31c3fea35a633a69679fb03d7664f43fb3", size = 2456104, upload-time = "2026-02-11T04:21:51.633Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/03/d0/bebb3ffbf31c5a8e97241476c4cf8b9828954693ce6744b4a2326af3e16b/pillow-12.1.1-cp314-cp314-ios_13_0_arm64_iphoneos.whl", hash = "sha256:417423db963cb4be8bac3fc1204fe61610f6abeed1580a7a2cbb2fbda20f12af", size = 4062652, upload-time = "2026-02-11T04:21:53.19Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/2d/c0/0e16fb0addda4851445c28f8350d8c512f09de27bbb0d6d0bbf8b6709605/pillow-12.1.1-cp314-cp314-ios_13_0_arm64_iphonesimulator.whl", hash = "sha256:b957b71c6b2387610f556a7eb0828afbe40b4a98036fc0d2acfa5a44a0c2036f", size = 4138823, upload-time = "2026-02-11T04:22:03.088Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/6b/fb/6170ec655d6f6bb6630a013dd7cf7bc218423d7b5fa9071bf63dc32175ae/pillow-12.1.1-cp314-cp314-ios_13_0_x86_64_iphonesimulator.whl", hash = "sha256:097690ba1f2efdeb165a20469d59d8bb03c55fb6621eb2041a060ae8ea3e9642", size = 3601143, upload-time = "2026-02-11T04:22:04.909Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/59/04/dc5c3f297510ba9a6837cbb318b87dd2b8f73eb41a43cc63767f65cb599c/pillow-12.1.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:2815a87ab27848db0321fb78c7f0b2c8649dee134b7f2b80c6a45c6831d75ccd", size = 5266254, upload-time = "2026-02-11T04:22:07.656Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/05/30/5db1236b0d6313f03ebf97f5e17cda9ca060f524b2fcc875149a8360b21c/pillow-12.1.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:f7ed2c6543bad5a7d5530eb9e78c53132f93dfa44a28492db88b41cdab885202", size = 4657499, upload-time = "2026-02-11T04:22:09.613Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/6f/18/008d2ca0eb612e81968e8be0bbae5051efba24d52debf930126d7eaacbba/pillow-12.1.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:652a2c9ccfb556235b2b501a3a7cf3742148cd22e04b5625c5fe057ea3e3191f", size = 6232137, upload-time = "2026-02-11T04:22:11.434Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/70/f1/f14d5b8eeb4b2cd62b9f9f847eb6605f103df89ef619ac68f92f748614ea/pillow-12.1.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d6e4571eedf43af33d0fc233a382a76e849badbccdf1ac438841308652a08e1f", size = 8042721, upload-time = "2026-02-11T04:22:13.321Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/5a/d6/17824509146e4babbdabf04d8171491fa9d776f7061ff6e727522df9bd03/pillow-12.1.1-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b574c51cf7d5d62e9be37ba446224b59a2da26dc4c1bb2ecbe936a4fb1a7cb7f", size = 6347798, upload-time = "2026-02-11T04:22:15.449Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/d1/ee/c85a38a9ab92037a75615aba572c85ea51e605265036e00c5b67dfafbfe2/pillow-12.1.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a37691702ed687799de29a518d63d4682d9016932db66d4e90c345831b02fb4e", size = 7039315, upload-time = "2026-02-11T04:22:17.24Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ec/f3/bc8ccc6e08a148290d7523bde4d9a0d6c981db34631390dc6e6ec34cacf6/pillow-12.1.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:f95c00d5d6700b2b890479664a06e754974848afaae5e21beb4d83c106923fd0", size = 6462360, upload-time = "2026-02-11T04:22:19.111Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f6/ab/69a42656adb1d0665ab051eec58a41f169ad295cf81ad45406963105408f/pillow-12.1.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:559b38da23606e68681337ad74622c4dbba02254fc9cb4488a305dd5975c7eeb", size = 7165438, upload-time = "2026-02-11T04:22:21.041Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/02/46/81f7aa8941873f0f01d4b55cc543b0a3d03ec2ee30d617a0448bf6bd6dec/pillow-12.1.1-cp314-cp314-win32.whl", hash = "sha256:03edcc34d688572014ff223c125a3f77fb08091e4607e7745002fc214070b35f", size = 6431503, upload-time = "2026-02-11T04:22:22.833Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/40/72/4c245f7d1044b67affc7f134a09ea619d4895333d35322b775b928180044/pillow-12.1.1-cp314-cp314-win_amd64.whl", hash = "sha256:50480dcd74fa63b8e78235957d302d98d98d82ccbfac4c7e12108ba9ecbdba15", size = 7176748, upload-time = "2026-02-11T04:22:24.64Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/e4/ad/8a87bdbe038c5c698736e3348af5c2194ffb872ea52f11894c95f9305435/pillow-12.1.1-cp314-cp314-win_arm64.whl", hash = "sha256:5cb1785d97b0c3d1d1a16bc1d710c4a0049daefc4935f3a8f31f827f4d3d2e7f", size = 2544314, upload-time = "2026-02-11T04:22:26.685Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/6c/9d/efd18493f9de13b87ede7c47e69184b9e859e4427225ea962e32e56a49bc/pillow-12.1.1-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:1f90cff8aa76835cba5769f0b3121a22bd4eb9e6884cfe338216e557a9a548b8", size = 5268612, upload-time = "2026-02-11T04:22:29.884Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f8/f1/4f42eb2b388eb2ffc660dcb7f7b556c1015c53ebd5f7f754965ef997585b/pillow-12.1.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1f1be78ce9466a7ee64bfda57bdba0f7cc499d9794d518b854816c41bf0aa4e9", size = 4660567, upload-time = "2026-02-11T04:22:31.799Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/01/54/df6ef130fa43e4b82e32624a7b821a2be1c5653a5fdad8469687a7db4e00/pillow-12.1.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:42fc1f4677106188ad9a55562bbade416f8b55456f522430fadab3cef7cd4e60", size = 6269951, upload-time = "2026-02-11T04:22:33.921Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/a9/48/618752d06cc44bb4aae8ce0cd4e6426871929ed7b46215638088270d9b34/pillow-12.1.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:98edb152429ab62a1818039744d8fbb3ccab98a7c29fc3d5fcef158f3f1f68b7", size = 8074769, upload-time = "2026-02-11T04:22:35.877Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/c3/bd/f1d71eb39a72fa088d938655afba3e00b38018d052752f435838961127d8/pillow-12.1.1-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d470ab1178551dd17fdba0fef463359c41aaa613cdcd7ff8373f54be629f9f8f", size = 6381358, upload-time = "2026-02-11T04:22:37.698Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/64/ef/c784e20b96674ed36a5af839305f55616f8b4f8aa8eeccf8531a6e312243/pillow-12.1.1-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6408a7b064595afcab0a49393a413732a35788f2a5092fdc6266952ed67de586", size = 7068558, upload-time = "2026-02-11T04:22:39.597Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/73/cb/8059688b74422ae61278202c4e1ad992e8a2e7375227be0a21c6b87ca8d5/pillow-12.1.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:5d8c41325b382c07799a3682c1c258469ea2ff97103c53717b7893862d0c98ce", size = 6493028, upload-time = "2026-02-11T04:22:42.73Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/c6/da/e3c008ed7d2dd1f905b15949325934510b9d1931e5df999bb15972756818/pillow-12.1.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:c7697918b5be27424e9ce568193efd13d925c4481dd364e43f5dff72d33e10f8", size = 7191940, upload-time = "2026-02-11T04:22:44.543Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/01/4a/9202e8d11714c1fc5951f2e1ef362f2d7fbc595e1f6717971d5dd750e969/pillow-12.1.1-cp314-cp314t-win32.whl", hash = "sha256:d2912fd8114fc5545aa3a4b5576512f64c55a03f3ebcca4c10194d593d43ea36", size = 6438736, upload-time = "2026-02-11T04:22:46.347Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/f3/ca/cbce2327eb9885476b3957b2e82eb12c866a8b16ad77392864ad601022ce/pillow-12.1.1-cp314-cp314t-win_amd64.whl", hash = "sha256:4ceb838d4bd9dab43e06c363cab2eebf63846d6a4aeaea283bbdfd8f1a8ed58b", size = 7182894, upload-time = "2026-02-11T04:22:48.114Z" },
|
||||||
|
{ url = "https://files.pythonhosted.org/packages/ec/d2/de599c95ba0a973b94410477f8bf0b6f0b5e67360eb89bcb1ad365258beb/pillow-12.1.1-cp314-cp314t-win_arm64.whl", hash = "sha256:7b03048319bfc6170e93bd60728a1af51d3dd7704935feb228c4d4faab35d334", size = 2546446, upload-time = "2026-02-11T04:22:50.342Z" },
]

[[package]]
name = "platformdirs"
version = "4.9.1"
@@ -362,6 +487,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/70/77/e8c95e95f1d4cdd88c90a96e31980df7e709e51059fac150046ad67fac63/platformdirs-4.9.1-py3-none-any.whl", hash = "sha256:61d8b967d34791c162d30d60737369cbbd77debad5b981c4bfda1842e71e0d66", size = 21307, upload-time = "2026-02-14T21:02:43.492Z" },
]

[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]

[[package]]
name = "pydantic"
version = "2.12.5"
@@ -430,6 +564,31 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769, upload-time = "2025-11-04T13:42:01.186Z" },
]

[[package]]
name = "pygments"
version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]

[[package]]
name = "pytest"
version = "9.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]

[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
@@ -490,6 +649,21 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/04/11/432f32f8097b03e3cd5fe57e88efb685d964e2e5178a48ed61e841f7fdce/pyyaml_env_tag-1.1-py3-none-any.whl", hash = "sha256:17109e1a528561e32f026364712fee1264bc2ea6715120891174ed1b980d2e04", size = 4722, upload-time = "2025-05-13T15:23:59.629Z" },
]

[[package]]
name = "requests"
version = "2.32.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "charset-normalizer" },
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]

[[package]]
name = "six"
version = "1.17.0"