Various updates across several AI/VM-related systems

This commit is contained in:
2024-12-09 00:30:51 -05:00
parent f030b93d02
commit 395d064c37
5 changed files with 55 additions and 4 deletions


@@ -14,6 +14,7 @@
- [Changing the domain](#changing-the-domain)
- [Uninstall](#uninstall)
- [Edit QCOW](#edit-qcow)
- [Stuck in login screen](#stuck-in-login-screen)
<https://github.com/nextcloud/all-in-one>
@@ -151,3 +152,20 @@ sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd0 --read-only /path/to/image.qcow2
udisksctl mount -b /dev/nbd0p1
```
## Stuck in login screen
Check the logs at `/var/www/html/data/nextcloud.log` inside the `nextcloud-aio-nextcloud` container.
Sometimes this is caused by a broken app or two-factor auth. Try:
```bash
# Check two-factor state, then disable the TOTP provider
./occ twofactorauth:state <user>
./occ twofactorauth:disable <user> totp
```
```bash
# Disable a problem app
./occ app:disable integration_openai
```
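These `occ` commands run inside the Nextcloud container, not on the host. Assuming the stock AIO container name and web server user, you can invoke them like this:
```bash
# Run occ inside the Nextcloud container as the web server user
docker exec --user www-data -it nextcloud-aio-nextcloud php occ twofactorauth:state <user>
```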


@@ -2,6 +2,7 @@
- [Fedora Kinoite](#fedora-kinoite)
- [Podman](#podman)
- [Autostarting services with quadlets](#autostarting-services-with-quadlets)
- [Libvirt, Qemu, KVM](#libvirt-qemu-kvm)
- [Network](#network)
- [VLAN Setup with nmcli](#vlan-setup-with-nmcli)
@@ -58,6 +59,26 @@ export REGISTRY_AUTH_FILE=$HOME/.podman-auth.json
Source that and then run `podman login` to create the file.
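As a quick sketch (the registry is a placeholder, and the export is assumed to live in your shell rc):
```bash
# Pick up REGISTRY_AUTH_FILE, then log in once;
# credentials are written to ~/.podman-auth.json
source ~/.bashrc
podman login quay.io
```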
### Autostarting services with quadlets
If you want to run something as your user at boot (a long-running service such as ollama), you can
create a user quadlet like so:
```bash
# Generate the .container quadlet (user quadlets live in ~/.config/containers/systemd/)
mkdir -p ~/.config/containers/systemd
podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run \
-d \
-v ollama:/root/.ollama \
-p 11434:11434 \
--name ollama \
docker.io/ollama/ollama > ~/.config/containers/systemd/ollama.container
# Reload so systemd generates ollama.service from the quadlet, then start it
systemctl --user daemon-reload
systemctl --user start ollama
```
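For reference, the generated unit is a plain ini-style quadlet. A hand-written equivalent of the command above (a sketch of what podlet emits, not its exact output):
```bash
# Write an equivalent quadlet by hand
cat > ~/.config/containers/systemd/ollama.container <<'EOF'
[Unit]
Description=Ollama

[Container]
Image=docker.io/ollama/ollama
Volume=ollama:/root/.ollama
PublishPort=11434:11434

[Install]
# Starts at login; run `loginctl enable-linger $USER` to also start at boot
WantedBy=default.target
EOF
```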
## Libvirt, Qemu, KVM
```bash


@@ -123,15 +123,15 @@ mkdir -p $MODEL_DIR
# LOCALAI_SINGLE_ACTIVE_BACKEND will unload the previous model before loading the next one
# LOCALAI_API_KEY will set an API key, omit to run unprotected.
# HF_TOKEN will set a login token for Hugging Face
# Good for single-gpu systems.
# Use the below to generate a quadlet for /etc/containers/systemd/local-ai.container
# podman run --rm ghcr.io/containers/podlet --install --description "Local AI" \
podman run \
-d \
-p 8080:8080 \
-e LOCALAI_SINGLE_ACTIVE_BACKEND=true \
-e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
-e LOCALAI_API_KEY=$(cat ~/.localai/token) \
--device /dev/dri \
--device /dev/kfd \
--name local-ai \
@@ -146,12 +146,11 @@ quay.io/go-skynet/local-ai:master-hipblas-ffmpeg
podman run \
-d \
-p 8081:8080 \
--name local-ai-webui \
--network local-ai \
-v $MODEL_DIR:/build/models \
-v localai-tmp:/tmp/generated \
quay.io/go-skynet/local-ai:master-hipblas-ffmpeg
```
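Once the container is up, LocalAI serves an OpenAI-compatible API, so a quick smoke test from the host looks like this (the model name is a placeholder for whatever you have installed):
```bash
# List the models LocalAI can see
curl -H "Authorization: Bearer $(cat ~/.localai/token)" http://localhost:8080/v1/models

# Minimal chat completion
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(cat ~/.localai/token)" \
  -d '{"model": "some-installed-model", "messages": [{"role": "user", "content": "Hello"}]}'
```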
## Running Anything LLM


@@ -6,6 +6,7 @@
- [Unsticking models stuck in "Stopping"](#unsticking-models-stuck-in-stopping)
- [Run Anything LLM Interface](#run-anything-llm-interface)
- [Installing External Service with Nginx and Certbot](#installing-external-service-with-nginx-and-certbot)
- [Ollama Models](#ollama-models)
- [Custom Models](#custom-models)
- [From Existing Model](#from-existing-model)
- [From Scratch](#from-scratch)
@@ -48,7 +49,15 @@ Note your ollama instance will be available to podman containers via `http://hos
## Install and run Ollama with Podman
```bash
# AMD
# Use the below to generate a quadlet for /etc/containers/systemd/ollama.container
# podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama:rocm
# CPU
# Use the below to generate a quadlet for /etc/containers/systemd/ollama.container
# podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama
```
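Once the container is running, pull and chat with a model through the `ollama` CLI inside it, or over the HTTP API (the model name is just an example):
```bash
# Interactive chat; the model is pulled on first use
podman exec -it ollama ollama run llama3.2

# Or hit the API from the host
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?"}'
```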
## Unsticking models stuck in "Stopping"
@@ -245,6 +254,10 @@ Also consider that podman will not restart your containers at boot. You'll need
from the podman run commands. Check out the comments above the podman run commands for more info. from the podman run commands. Check out the comments above the podman run commands for more info.
Also search the web for "podman quadlets" or ask your AI about it!
## Ollama Models
<https://ollama.com/library>
## Custom Models
<https://www.gpu-mart.com/blog/import-models-from-huggingface-to-ollama>