Summary
Local AI diagnostics report Ollama as not running and required models as missing.
Problem
Running diagnostics under Settings > Local AI reports an Ollama error: the server is not running, the binary cannot be found, and the expected chat/embedding models are missing.
Expected behavior: diagnostics either detects and starts/repairs Ollama correctly, or surfaces clear setup actions for installing Ollama and downloading the required models.
Steps to reproduce:
- Open OpenHuman desktop app.
- Go to Settings > Local AI / diagnostics.
- Click Run Diagnostics.
- Observe that the Ollama server is reported as not running or unreachable at localhost:11434 and the required models are marked missing.
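The diagnostic results can be reproduced manually outside the app. A minimal sketch of the three checks, using the documented Ollama HTTP API (`GET /api/tags`); the function name and result shape are hypothetical, the endpoint and model names come from this report:

```python
# Sketch of the three checks the diagnostics report on:
# binary present, server reachable, required models installed.
import json
import shutil
import urllib.error
import urllib.request

# Model names taken from the diagnostics screenshot in this report.
REQUIRED_MODELS = ["gemma3:1b-it-qat", "all-minilm:latest"]

def check_ollama(base_url="http://localhost:11434"):
    """Return a dict mirroring the three results shown by the diagnostics."""
    result = {
        "binary_found": shutil.which("ollama") is not None,
        "server_running": False,
        "missing_models": list(REQUIRED_MODELS),
    }
    try:
        # GET /api/tags lists locally installed models (documented Ollama API).
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            tags = json.load(resp)
        result["server_running"] = True
        installed = {m["name"] for m in tags.get("models", [])}
        result["missing_models"] = [m for m in REQUIRED_MODELS if m not in installed]
    except (urllib.error.URLError, OSError):
        pass  # server not running or unreachable: keep the failure defaults
    return result

if __name__ == "__main__":
    print(check_ollama())
```

On an affected machine this prints `server_running: False` with both models listed as missing, matching the screenshot.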
Environment:
- Desktop app
- Settings > Local AI diagnostics
- Backend: Ollama
- Expected endpoint: http://localhost:11434
- Platform/version unknown
Solution (optional)
Check Ollama binary detection, configured binary path, bootstrap/resume flow, and model download checks. Improve diagnostics to show one-click repair actions for installing/starting Ollama and downloading missing models.
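One way the repair actions could be derived from the diagnostic results; a hedged sketch only (the function and parameters are hypothetical, but the commands are the standard Ollama CLI and the model names come from this report):

```python
# Hypothetical helper: map diagnostic results to the ordered commands
# a one-click "Repair" action could run.
def repair_actions(binary_found, server_running, missing_models):
    """Return the shell commands needed to repair the local Ollama setup."""
    actions = []
    if not binary_found:
        # Installation is platform-specific; point the user at the installer.
        actions.append("<install Ollama first: https://ollama.com/download>")
        return actions
    if not server_running:
        actions.append("ollama serve")  # start the local server
    for model in missing_models:
        actions.append(f"ollama pull {model}")  # download each missing model
    return actions
```

For the state in the screenshot (binary missing), this would surface the install step first; once the binary exists, it yields `ollama serve` followed by one `ollama pull` per missing model.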
Acceptance criteria
- Diagnostics correctly detect whether the Ollama binary exists and whether the server is reachable at the expected endpoint.
- When Ollama or required models are missing, diagnostics surface clear actions to install/start Ollama and download the missing models.
Related
Screenshot provided showing Ollama Diagnostics with server not running, binary not found, and missing gemma3:1b-it-qat / all-minilm:latest models.