various updates across several ai/vm related systems
@@ -6,6 +6,7 @@
- [Unsticking models stuck in "Stopping"](#unsticking-models-stuck-in-stopping)
- [Run Anything LLM Interface](#run-anything-llm-interface)
- [Installing External Service with Nginx and Certbot](#installing-external-service-with-nginx-and-certbot)
- [Ollama Models](#ollama-models)
- [Custom Models](#custom-models)
  - [From Existing Model](#from-existing-model)
  - [From Scratch](#from-scratch)
@@ -48,7 +49,15 @@ Note your ollama instance will be available to podman containers via `http://hos
## Install and run Ollama with Podman

```bash
# AMD GPU (ROCm): /dev/kfd and /dev/dri pass the GPU through to the container
# Use the below to generate a quadlet for /etc/containers/systemd/local-ai.container
# (uncomment it so the run command that follows becomes podlet's input):
# podman run --rm ghcr.io/containers/podlet --install --description "Local AI" \
podman run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama:rocm

# CPU only
# Use the below to generate a quadlet for /etc/containers/systemd/local-ai.container
# (uncomment it so the run command that follows becomes podlet's input):
# podman run --rm ghcr.io/containers/podlet --install --description "Local AI" \
podman run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama
```
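
Once the container is running, you can sanity-check the server from the host (assuming the default port mapping above):

```bash
# Confirm the container is up
podman ps --filter name=ollama

# The API answers on the published port; /api/tags lists installed models
curl http://localhost:11434/api/tags
```
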
## Unsticking models stuck in "Stopping"
@@ -245,6 +254,10 @@ Also consider that podman will not restart your containers at boot. You'll need
from the podman run commands. Check out the comments above those commands for more info.
Also search the web for "podman quadlets" or ask your AI about it!
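
For reference, a minimal sketch of the kind of quadlet podlet generates from the CPU run command above, plus the systemd steps to activate it (podlet's actual output may differ):

```bash
# Write the quadlet; the systemd generator derives the unit name
# from the file name (local-ai.container -> local-ai.service)
sudo tee /etc/containers/systemd/local-ai.container >/dev/null <<'EOF'
[Unit]
Description=Local AI

[Container]
ContainerName=ollama
Image=docker.io/ollama/ollama
Volume=ollama:/root/.ollama
PublishPort=11434:11434

[Install]
WantedBy=default.target
EOF

# Reload so the generator produces the unit, then start it; the
# [Install] section makes it start again at boot
sudo systemctl daemon-reload
sudo systemctl start local-ai.service
```
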
## Ollama Models
<https://ollama.com/library>
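
To fetch and chat with a model from the library, go through the containerized CLI (`llama3.2` is just an example tag):

```bash
# Download a model from the library
podman exec -it ollama ollama pull llama3.2

# Chat with it interactively
podman exec -it ollama ollama run llama3.2
```
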
## Custom Models
<https://www.gpu-mart.com/blog/import-models-from-huggingface-to-ollama>
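
For the existing-model route, the flow looks roughly like this; `mymodel`, the base model, and the settings are placeholders:

```bash
# Write a Modelfile inside the container and build a derived model from it
podman exec -i ollama sh -c 'cat > /tmp/Modelfile <<EOF
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM """You are a concise assistant."""
EOF
ollama create mymodel -f /tmp/Modelfile'

# The new model shows up alongside the pulled ones
podman exec -it ollama ollama list
```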