Compare commits

3 commits: 81413c064e ... 395d064c37

| SHA1 |
|---|
| 395d064c37 |
| f030b93d02 |
| c48af18a78 |
@@ -6,6 +6,7 @@
- [IPV6](#ipv6)
- [Install](#install)
- [Backups](#backups)
- [Maintenance Mode](#maintenance-mode)
- [Trusted Proxy](#trusted-proxy)
- [Default phone region](#default-phone-region)
- [Adding existing files](#adding-existing-files)
@@ -13,6 +14,7 @@
- [Changing the domain](#changing-the-domain)
- [Uninstall](#uninstall)
- [Edit QCOW](#edit-qcow)
- [Stuck in login screen](#stuck-in-login-screen)

<https://github.com/nextcloud/all-in-one>
@@ -89,6 +91,16 @@ Write down the passphrase. Store both at safe place(s).
docker exec nextcloud-aio-borgbackup borg key export /mnt/borgbackup/borg/
```
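As a sanity check after exporting the key, you can list the archives in the same repository (a sketch; assumes the backup container already has the repo passphrase in its environment):

```bash
# List backup archives in the borg repository (same path as above)
docker exec nextcloud-aio-borgbackup borg list /mnt/borgbackup/borg
```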
## Maintenance Mode

```bash
docker stop nextcloud-aio-apache
docker exec -it -u www-data nextcloud-aio-nextcloud ./occ maintenance:mode --on

docker start nextcloud-aio-apache
docker exec -it -u www-data nextcloud-aio-nextcloud ./occ maintenance:mode --off
```

## Trusted Proxy

If running behind a reverse proxy.
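The usual step is registering the proxy's address in `trusted_proxies` via occ (a sketch; the IP is illustrative, use your proxy's address):

```bash
# Hypothetical example: register the reverse proxy's IP as trusted (index 0)
docker exec -it -u www-data nextcloud-aio-nextcloud \
  ./occ config:system:set trusted_proxies 0 --value="192.168.1.2"
```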
@@ -140,3 +152,20 @@ sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd0 --read-only /path/to/image.qcow2
udisksctl mount -b /dev/nbd0p1
```
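When done, reverse the steps above to detach the image:

```bash
# Unmount the partition and disconnect the NBD device
udisksctl unmount -b /dev/nbd0p1
sudo qemu-nbd -d /dev/nbd0
```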
## Stuck in login screen

Check the logs at `/var/www/html/data/nextcloud.log` in the `nextcloud-aio-nextcloud` container.

Sometimes this is caused by a broken app or two-factor authentication. Try:

```bash
# Check the user's two-factor state
./occ twofactorauth:state <user>
# Disable the TOTP provider for the user
./occ twofactorauth:disable <user> totp
```

```bash
# Disable the problem app
./occ app:disable integration_openai
```
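The log mentioned above is one JSON object per line; a quick way to read the latest entry from the host (a sketch; assumes `jq` is installed on the host):

```bash
# Pretty-print the most recent Nextcloud log entry
docker exec -u www-data nextcloud-aio-nextcloud \
  tail -n 1 /var/www/html/data/nextcloud.log | jq .
```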
@@ -2,6 +2,7 @@
- [Fedora Kinoite](#fedora-kinoite)
- [Podman](#podman)
- [Autostarting services with quadlets](#autostarting-services-with-quadlets)
- [Libvirt, Qemu, KVM](#libvirt-qemu-kvm)
- [Network](#network)
- [VLAN Setup with nmcli](#vlan-setup-with-nmcli)

@@ -35,6 +36,7 @@
- [Frog](#frog)
- [CPU-X](#cpu-x)
- [Ungoogled Chromium](#ungoogled-chromium)
- [Signal](#signal)
- [Toolbox](#toolbox)
- [Reese's Arch Toolbox](#reeses-arch-toolbox)
- [Pipx/Poetry](#pipxpoetry)
@@ -57,6 +59,26 @@ export REGISTRY_AUTH_FILE=$HOME/.podman-auth.json

Source that and then run `podman login` to create the file.
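For instance (the file holding the export and the registry are both illustrative):

```bash
# Assuming the REGISTRY_AUTH_FILE export above lives in ~/.bashrc
source ~/.bashrc
podman login ghcr.io   # registry is an example
```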
### Autostarting services with quadlets

If you want to run something as your user at boot (as a user-level systemd service, e.g. ollama), you can
create a user quadlet like so:
```bash
# Generate the .container file (user quadlets live in ~/.config/containers/systemd/)
podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run \
-d \
-v ollama:/root/.ollama \
-p 11434:11434 \
--name ollama \
docker.io/ollama/ollama > ~/.config/containers/systemd/ollama.container

# Start the service
systemctl --user daemon-reload
systemctl --user start ollama
```
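For reference, podlet's output is an INI-style `.container` unit, roughly like this sketch (the exact generated content may differ; field names are from the quadlet format):

```ini
[Unit]
Description=Ollama

[Container]
Image=docker.io/ollama/ollama
ContainerName=ollama
Volume=ollama:/root/.ollama
PublishPort=11434:11434

[Install]
WantedBy=default.target
```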
## Libvirt, Qemu, KVM

```bash
@@ -409,14 +431,36 @@ cp /var/lib/flatpak/exports/share/applications/io.github.ungoogled_software.ungo
sed \
-ri 's/(Exec=.* run )(.*)(io.github.ungoogled_software.ungoogled_chromium)/\1--socket=wayland \2\3 --ozone-platform=wayland/g' \
~/.local/share/applications/io.github.ungoogled_software.ungoogled_chromium.desktop
```

```bash
# Validate and reload desktop database
desktop-file-validate ~/.local/share/applications/io.github.ungoogled_software.ungoogled_chromium.desktop
update-desktop-database
```
### Signal

Chat for when Nextcloud Talk is down.

```bash
flatpak install org.signal.Signal

# Fix wayland scaling issues
cp /var/lib/flatpak/exports/share/applications/org.signal.Signal.desktop ~/.local/share/applications
# Add `--socket=wayland` and `--ozone-platform=wayland` to the correct locations in the "Exec" field
sed \
-ri 's/(Exec=.* run )(.*)(org.signal.Signal)/\1--socket=wayland \2\3 --ozone-platform=wayland/g' \
~/.local/share/applications/org.signal.Signal.desktop

# Validate and reload desktop database
desktop-file-validate ~/.local/share/applications/org.signal.Signal.desktop
update-desktop-database
```

```bash
# Autostart
ln -s ~/.local/share/applications/org.signal.Signal.desktop ~/.config/autostart/
```
## Toolbox

```bash
@@ -123,15 +123,15 @@ mkdir -p $MODEL_DIR

# LOCALAI_SINGLE_ACTIVE_BACKEND will unload the previous model before loading the next one; good for single-GPU systems.
# LOCALAI_API_KEY will set an API key; omit to run unprotected.
# HF_TOKEN will set a login token for Hugging Face.
# Use the below to generate a quadlet for /etc/containers/systemd/local-ai.container
# podman run --rm ghcr.io/containers/podlet --install --description "Local AI" \
podman run \
-d \
-p 8080:8080 \
-e LOCALAI_SINGLE_ACTIVE_BACKEND=true \
-e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
-e LOCALAI_API_KEY=$(cat ~/.localai/token) \
--device /dev/dri \
--device /dev/kfd \
--name local-ai \
@@ -146,12 +146,11 @@ quay.io/go-skynet/local-ai:master-hipblas-ffmpeg
podman run \
-d \
-p 8081:8080 \
-e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
--name local-ai-webui \
--network local-ai \
-v $MODEL_DIR:/build/models \
-v localai-tmp:/tmp/generated \
quay.io/go-skynet/local-ai:master-hipblas-ffmpeg
```
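Once one of the containers above is up, a quick way to verify the API and the key (a sketch; LocalAI serves an OpenAI-compatible API on the port mapped above):

```bash
# List available models through the OpenAI-compatible endpoint
curl -H "Authorization: Bearer $(cat ~/.localai/token)" http://localhost:8080/v1/models
```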
## Running Anything LLM

@@ -6,6 +6,7 @@
- [Unsticking models stuck in "Stopping"](#unsticking-models-stuck-in-stopping)
- [Run Anything LLM Interface](#run-anything-llm-interface)
- [Installing External Service with Nginx and Certbot](#installing-external-service-with-nginx-and-certbot)
- [Ollama Models](#ollama-models)
- [Custom Models](#custom-models)
- [From Existing Model](#from-existing-model)
- [From Scratch](#from-scratch)
@@ -48,7 +49,15 @@ Note your ollama instance will be available to podman containers via `http://hos

## Install and run Ollama with Podman

```bash
# AMD
# Use the below to generate a quadlet for /etc/containers/systemd/ollama.container
# podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama:rocm

# CPU
# Use the below to generate a quadlet for /etc/containers/systemd/ollama.container
# podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama
```
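To confirm the container is serving requests (port as published above):

```bash
# List locally available models; an empty list means the server is up but has none pulled yet
curl http://localhost:11434/api/tags
```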
## Unsticking models stuck in "Stopping"

@@ -245,6 +254,10 @@ Also consider that podman will not restart your containers at boot. You'll need
from the podman run commands. Check out the comments above the podman run commands for more info.
Also search the web for "podman quadlets" or ask your AI about it!

## Ollama Models

<https://ollama.com/library>

## Custom Models

<https://www.gpu-mart.com/blog/import-models-from-huggingface-to-ollama>
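The linked guide walks through importing Hugging Face GGUF models; the general shape is a Modelfile plus `ollama create` (a sketch; the file name and parameter are illustrative):

```bash
# Hypothetical import of a local GGUF file into Ollama
cat > Modelfile <<'EOF'
FROM ./model.gguf
PARAMETER temperature 0.7
EOF
ollama create my-model -f Modelfile
```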