# Local AI with Anything LLM

## Running with Podman

This installs Local AI and Anything LLM as backend and frontend services on a shared Podman network.

```bash
podman network create localai

# Local AI backend
podman run \
  -d \
  -p 127.0.0.1:8080:8080 \
  --network localai \
  --name local-ai \
  -v /models:/build/models \
  quay.io/go-skynet/local-ai:latest-cpu

# Anything LLM interface
export STORAGE_LOCATION=/anything-llm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
chown -R 1000:1000 $STORAGE_LOCATION && \
podman run \
  -d \
  -p 127.0.0.1:3001:3001 \
  --name anything-llm \
  --network localai \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  docker.io/mintplexlabs/anythingllm
```

### Quadlets with Podlet

Podlet can generate systemd quadlet units from the `podman` commands above:

```bash
podman run --rm ghcr.io/containers/podlet --install --description "Local AI Network" \
  podman network create localai

podman run --rm ghcr.io/containers/podlet --install --description "Local AI" \
  podman run \
  -d \
  -p 127.0.0.1:8080:8080 \
  --network localai \
  --name local-ai \
  -v /models:/build/models \
  quay.io/go-skynet/local-ai:latest-cpu

export STORAGE_LOCATION=/anything-llm && \
podman run --rm ghcr.io/containers/podlet --install --description "Anything LLM" \
  podman run \
  -d \
  -p 127.0.0.1:3001:3001 \
  --name anything-llm \
  --network localai \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  docker.io/mintplexlabs/anythingllm
```

Make sure to add

```conf
[Service]
Restart=always
```

to each generated unit so the containers are restarted automatically if they exit. Put the generated files in `/usr/share/containers/systemd/` (on Arch Linux the location is `/etc/containers/systemd/`).

## Models

Example configs can be found here:

### Config

```yaml
name: llama-3.2
parameters:
  model: huggingface/Llama-3.2-3B-Instruct-f16.gguf
  temperature: 0.6
backend: llama-cpp
# Default context size
context_size: 8192
threads: 16
```

### Chat

llama-3.2-3b-instruct:q8_0

### Code

### Agent

llama-3.2-3b-instruct:q8_0

## AWS CLI and firewalld

See [generating AWS credentials](cloud/graduated/aws_iam/README.md).

```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install

# Configure default credentials
aws configure
```

Open http/s in firewalld:

```bash
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload
```

## Nginx with Certbot

The following sets up Nginx on Fedora Linux with a Certbot certificate obtained via the Route53 DNS challenge, in front of Anything LLM running on port 3001 with WebSockets. The domain will be chatreesept.reeseapps.com.

1. Install Nginx:

```
dnf install -y nginx
```

2. Start and enable the Nginx service:

```
systemctl enable --now nginx
```

3. Install Certbot and the Route53 DNS plugin:

```
dnf install -y certbot python3-certbot-dns-route53
```

4. Request a certificate for your domain using the Route53 DNS challenge:

```
certbot certonly --dns-route53 -d chatreesept.reeseapps.com
```

The plugin reads AWS credentials from the standard locations (for example `~/.aws/credentials`, written by `aws configure` above); the interactive prompts only ask for an email address and agreement to the Terms of Service.
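If the challenge fails, a dry run is a cheap way to debug before retrying: it exercises the whole DNS-01 flow against the Let's Encrypt staging environment without counting toward issuance rate limits.

```bash
# Full end-to-end test of the Route53 DNS-01 challenge against the
# Let's Encrypt staging environment; no real certificate is issued.
certbot certonly --dns-route53 -d chatreesept.reeseapps.com --dry-run
```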
5. Configure Nginx for your domain. Create a new configuration file:

```
vim /etc/nginx/conf.d/chatreesept.reeseapps.com.conf
```

Add the following configuration (directives at the top of a `conf.d` file apply at the `http` level):

```
keepalive_timeout 1h;
send_timeout 1h;
client_body_timeout 1h;
client_header_timeout 1h;
proxy_connect_timeout 1h;
proxy_read_timeout 1h;
proxy_send_timeout 1h;

server {
    listen 80;
    server_name chatreesept.reeseapps.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name chatreesept.reeseapps.com;

    ssl_certificate /etc/letsencrypt/live/chatreesept.reeseapps.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chatreesept.reeseapps.com/privkey.pem;

    location / {
        client_max_body_size 50m;
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

6. Test the Nginx configuration for syntax errors:

```
nginx -t
```

If there are no errors, reload Nginx to apply the changes:

```
systemctl reload nginx
```

7. Set up automatic certificate renewal. Because the DNS challenge never binds port 80, Nginx does not need to be stopped; a reload after renewal is enough. Edit root's crontab:

```
sudo crontab -e
```

Add the following line to the end of the file:

```
0 0 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```

The Anything LLM service on port 3001, including its WebSocket endpoints, is now reachable at https://chatreesept.reeseapps.com with a valid Let's Encrypt certificate that is checked for renewal daily.

## Nginx

```bash
certbot-3 certonly --dns-route53 -d chatreesept.reeseapps.com
```

An alternative server block, with long timeouts for slow LLM requests, buffering disabled so streamed responses stay readable, and a dedicated WebSocket location for the agent protocol:

```conf
server {
    # Enable websocket connections for the agent protocol.
    location ~* ^/api/agent-invocation/(.*) {
        proxy_pass http://0.0.0.0:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    listen 80;
    server_name [insert FQDN here];

    location / {
        # Prevent timeouts on long-running requests.
        proxy_connect_timeout 605;
        proxy_send_timeout 605;
        proxy_read_timeout 605;
        send_timeout 605;
        keepalive_timeout 605;

        # Enable readable HTTP streaming for LLM streamed responses.
        proxy_buffering off;
        proxy_cache off;

        # Proxy the locally running service.
        proxy_pass http://0.0.0.0:3001;
    }
}
```
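A minimal sanity check of the proxy from the host, assuming the HTTPS configuration from the Certbot walkthrough is in place. The `/api/agent-invocation/` prefix comes from the config above, but the `test` suffix is a placeholder used only to exercise the upgrade handling:

```bash
# Follow the HTTP -> HTTPS redirect and fetch the app through the proxy.
curl -iL https://chatreesept.reeseapps.com/

# Request a WebSocket upgrade; a correctly proxied endpoint answers
# "101 Switching Protocols" (the backend may instead return an HTTP
# error for the placeholder path, which still proves the proxy passed
# the Upgrade headers through). --max-time keeps curl from holding the
# upgraded connection open.
curl -i --max-time 5 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  https://chatreesept.reeseapps.com/api/agent-invocation/test
```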