add TOC to ollama

2024-11-25 10:54:00 -05:00
parent ff71cbacbd
commit 5f1d03349b


@@ -1,5 +1,16 @@
 # Ollama
+
+- [Ollama](#ollama)
+  - [Run natively with GPU support](#run-natively-with-gpu-support)
+  - [Unsticking models stuck in "Stopping"](#unsticking-models-stuck-in-stopping)
+  - [Run Anything LLM Interface](#run-anything-llm-interface)
+    - [Anything LLM Quadlet with Podlet](#anything-llm-quadlet-with-podlet)
+    - [Now with Nginx and Certbot](#now-with-nginx-and-certbot)
+  - [Custom Models](#custom-models)
+    - [From Existing Model](#from-existing-model)
+    - [From Scratch](#from-scratch)
+    - [Converting to gguf](#converting-to-gguf)
 
 <https://github.com/ollama/ollama>
 
 ## Run natively with GPU support
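
For reference, a TOC like the one added above can be generated straight from the file's headings. A rough sketch in awk, assuming GitHub-style anchor slugs (lowercased, punctuation dropped, spaces to hyphens) and covering only the `##`/`###` levels; `ollama.md` is a stand-in filename:

```bash
awk '/^##/ {
  match($0, /^#+/); level = RLENGTH      # 2 for ##, 3 for ###
  title = substr($0, level + 2)          # heading text after the marker
  anchor = tolower(title)
  gsub(/[^a-z0-9 -]/, "", anchor)        # drop punctuation such as quotes
  gsub(/ /, "-", anchor)                 # spaces become hyphens
  indent = ""; for (i = 2; i < level; i++) indent = indent "  "
  print indent "- [" title "](#" anchor ")"
}' ollama.md
```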
@@ -43,6 +54,13 @@ ollama pull nomic-embed-text:137m-v1.5-fp16
 
 Note your ollama instance will be available to podman containers via `http://host.containers.internal:11434`
 
+## Unsticking models stuck in "Stopping"
+
+```bash
+ollama ps | grep -i stopping
+pgrep ollama | xargs -I '%' sh -c 'kill %'
+```
+
 ## Run Anything LLM Interface
 
 ```bash
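
The `pgrep ... | xargs -I '%' sh -c 'kill %'` pipeline added above spawns a shell per PID; `pkill` does the same in one step. A sketch, assuming Ollama runs as the `ollama` systemd service that the official Linux installer sets up:

```bash
# one-step equivalent of the pgrep | xargs pipeline
pkill ollama

# if ollama is managed by systemd, a restart is usually the cleaner fix
sudo systemctl restart ollama
```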
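
The `host.containers.internal` note in this hunk's context is easy to verify from inside a container. A quick check, assuming the public `curlimages/curl` image (any image with curl on board works):

```bash
# list the host's installed models over the Ollama API from a podman container
podman run --rm docker.io/curlimages/curl:latest \
  -s http://host.containers.internal:11434/api/tags
```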
@@ -267,6 +285,8 @@ will be automatically renewed daily.
 
 <https://www.gpu-mart.com/blog/import-models-from-huggingface-to-ollama>
+
+<https://www.hostinger.com/tutorials/ollama-cli-tutorial#Setting_up_Ollama_in_the_CLI>
 
 ### From Existing Model
 
 ```bash
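
Deriving a model from an existing one in Ollama goes through a Modelfile. A minimal sketch with hypothetical names (`llama3.2` as the base, `my-assistant` as the result), not necessarily the recipe this section contains:

```bash
# derive a custom model from an existing base via a Modelfile
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM "You are a concise technical assistant."
EOF

ollama create my-assistant -f Modelfile
ollama run my-assistant
```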