rdune71 committed
Commit c659836 · 1 Parent(s): ef600c3

Update system to use remote Ollama host with graceful fallback handling

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -24,6 +24,6 @@ A personal development assistant powered by LLMs.
 
 ## Deployment
 
-The application is designed to work in Hugging Face Spaces environment. For local LLM inference, it connects to a remote Ollama instance via ngrok tunnel at . This allows the application to access powerful local models without requiring them to be installed directly in the Space.
+The application is designed to work in Hugging Face Spaces environment. For local LLM inference, it connects to a remote Ollama instance via ngrok tunnel at `https://a877ef1aa487.ngrok-free.app`. This allows the application to access powerful local models without requiring them to be installed directly in the Space.
 
 In case the remote Ollama instance is unavailable, the system gracefully falls back to checking a local instance, and handles unavailability by showing appropriate status messages in the UI.