Ollama
https://github.com/ollama/ollama
Running with Podman
podman network create localai
podman run \
-d \
-v ollama:/root/.ollama \
-p 127.0.0.1:11434:11434 \
--network localai \
--name ollama \
docker.io/ollama/ollama
# Pull new models
podman container exec ollama ollama pull llama3.2:3b
podman container exec ollama ollama pull llama3.2:1b
podman container exec ollama ollama pull llama3.2-vision:11b
podman container exec ollama ollama pull llava-llama3:8b
podman container exec ollama ollama pull deepseek-coder-v2:16b
podman container exec ollama ollama pull opencoder:8b
podman container exec ollama ollama pull codestral:22b
# Talk to an existing model via the CLI
podman container exec -it ollama ollama run llama3.2:3b
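Ollama also serves an HTTP API on the published port, so the models can be queried without entering the container. A quick sketch, assuming the container above is running and llama3.2:3b has been pulled (the fallback echo keeps it safe to run when it is not):

```shell
# Query the Ollama HTTP API on the port published above.
OLLAMA_URL="http://127.0.0.1:11434"
curl -s "${OLLAMA_URL}/api/generate" \
  -d '{"model": "llama3.2:3b", "prompt": "Why is the sky blue?", "stream": false}' \
  || echo "Ollama is not reachable at ${OLLAMA_URL}"
```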
podman run \
-d \
-p 127.0.0.1:3001:3001 \
--name anything-llm \
--network localai \
--cap-add SYS_ADMIN \
-v anything-llm:/app/server \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
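Because both containers join the localai network, AnythingLLM can reach Ollama by container name instead of going through the host. A hedged connectivity check (docker.io/curlimages/curl is just an assumed image that ships curl; any image with curl works):

```shell
# Inside the "localai" network, the Ollama container resolves by name,
# so this is the base URL to configure as AnythingLLM's Ollama endpoint.
OLLAMA_BASE_URL="http://ollama:11434"
# One-off container on the same network to verify connectivity.
podman run --rm --network localai docker.io/curlimages/curl:latest \
  -s "${OLLAMA_BASE_URL}/api/tags" || echo "Ollama not reachable on localai network"
```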
Quadlets with Podlet
# Create a host directory for the ollama bind mount
mkdir /ollama
podman run --rm ghcr.io/containers/podlet --install --description "Local AI Network" \
podman network create localai
podman run --rm ghcr.io/containers/podlet --install --description "Ollama" \
podman run \
-d \
-v /ollama:/root/.ollama \
-p 127.0.0.1:11434:11434 \
--network localai \
--name ollama \
docker.io/ollama/ollama
export STORAGE_LOCATION=/anything-llm && \
podman run --rm ghcr.io/containers/podlet --install --description "Anything LLM" \
podman run \
-d \
-p 127.0.0.1:3001:3001 \
--name anything-llm \
--network localai \
--cap-add SYS_ADMIN \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
Make sure to add

[Service]
Restart=always

to each generated unit so the containers are restarted automatically if they exit.
Put the generated .container files in /usr/share/containers/systemd/.
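For reference, podlet's output for the Ollama container should look roughly like the following .container file (a sketch; the exact output depends on the podlet version):

```ini
# /usr/share/containers/systemd/ollama.container (sketch)
[Unit]
Description=Ollama

[Container]
ContainerName=ollama
Image=docker.io/ollama/ollama
Network=localai
PublishPort=127.0.0.1:11434:11434
Volume=/ollama:/root/.ollama

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After placing the files, run systemctl daemon-reload; systemd should then generate and be able to start an ollama.service unit.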
Podman systemd service
See generating AWS credentials (needed for the Route53 DNS challenge below).
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
# Configure default credentials
aws configure
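The Route53 challenge needs AWS credentials that can edit the zone's records. The certbot-dns-route53 plugin documents a minimal IAM policy along these lines (YOURHOSTEDZONEID is a placeholder for your zone's ID):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["route53:ListHostedZones", "route53:GetChange"],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": ["route53:ChangeResourceRecordSets"],
            "Resource": ["arn:aws:route53:::hostedzone/YOURHOSTEDZONEID"]
        }
    ]
}
```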
Open http/s in firewalld:
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload
Here are the detailed instructions for installing and setting up Nginx on Fedora Linux with Certbot, using the Route53 DNS challenge, to put in front of the "Anything LLM" service running on port 3001 with WebSockets. The domain will be chatreesept.reeseapps.com.

1. Install Nginx:

dnf install -y nginx

2. Start and enable the Nginx service:

systemctl enable --now nginx

3. Install Certbot and the Route53 DNS plugin:

dnf install -y certbot python3-certbot-dns-route53

4. Request a certificate for the domain using the Route53 DNS challenge:

certbot certonly --dns-route53 -d chatreesept.reeseapps.com

Follow the prompts to provide your Route53 credentials and email address.

5. Configure Nginx for the domain. Create a new configuration file:

vim /etc/nginx/conf.d/chatreesept.reeseapps.com.conf

Add the following configuration to the file:

keepalive_timeout 1h;
send_timeout 1h;
client_body_timeout 1h;
client_header_timeout 1h;
proxy_connect_timeout 1h;
proxy_read_timeout 1h;
proxy_send_timeout 1h;

server {
    listen 80;
    server_name chatreesept.reeseapps.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name chatreesept.reeseapps.com;

    ssl_certificate /etc/letsencrypt/live/chatreesept.reeseapps.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chatreesept.reeseapps.com/privkey.pem;

    location / {
        client_max_body_size 50m;
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

6. Test the Nginx configuration for syntax errors:

nginx -t

If there are no errors, reload Nginx to apply the changes:

systemctl reload nginx

7. Set up automatic certificate renewal. Open the root crontab:

sudo crontab -e

Add the following line to the end of the file:

0 0 * * * certbot renew --quiet --no-self-upgrade --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"

Now the "Anything LLM" service running on port 3001 (including WebSockets) is reachable at chatreesept.reeseapps.com with a valid Let's Encrypt certificate. The cron job attempts renewal every day; certbot only replaces the certificate when it is close to expiry.
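Before trusting the cron job, the renewal can be exercised against the Let's Encrypt staging environment with certbot's dry-run mode, using the same Route53 credentials (guarded here so the sketch is safe to run when certbot is absent):

```shell
# Simulate a renewal without replacing the live certificate.
RENEW_CMD="certbot renew --dry-run"
$RENEW_CMD || echo "dry run failed; check Route53 credentials and nginx config"
```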
Nginx
certbot-3 certonly --dns-route53 -d chatreesept.reeseapps.com
Make sure your server block includes the following timeout and streaming settings:
server {
    listen 80;
    server_name [insert FQDN here];

    # Enable websocket connections for agent protocol.
    location ~* ^/api/agent-invocation/(.*) {
        proxy_pass http://0.0.0.0:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location / {
        # Prevent timeouts on long-running requests.
        proxy_connect_timeout 605;
        proxy_send_timeout 605;
        proxy_read_timeout 605;
        send_timeout 605;
        keepalive_timeout 605;

        # Enable readable HTTP streaming for LLM streamed responses.
        proxy_buffering off;
        proxy_cache off;

        # Proxy your locally running service.
        proxy_pass http://0.0.0.0:3001;
    }
}