Self-hosted Providers
This page will help you set up self-hosted providers for use with LLM Vision.
Ollama
To use Ollama you must set up an Ollama server first. You can download it from here. Once installed, you need to download a model to use. A full list of available models with vision support can be found here; keep in mind that only models with vision capabilities can be used. The recommended model is gemma3.
```
ollama run gemma3
```

If your Home Assistant is not running on the same server as Ollama, you need to set the `OLLAMA_HOST` environment variable.
Linux
Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.

For each environment variable, add a line `Environment` under the `[Service]` section:

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

Save and close the editor.
Reload systemd and restart Ollama:

```
systemctl daemon-reload
systemctl restart ollama
```
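To confirm the override took effect, you can check the service environment and the listening socket on the Ollama host. This is an optional sketch that assumes a standard systemd-based install and Ollama's default port 11434.

```
# Show the environment applied to the service; OLLAMA_HOST=0.0.0.0 should appear
systemctl show ollama.service --property=Environment

# Confirm Ollama is listening on all interfaces on the default port 11434
ss -ltn | grep 11434
```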
Windows

Quit Ollama from the system tray
Open File Explorer
Right click on This PC and select Properties
Click on Advanced system settings
Select Environment Variables
Under User variables click New
For variable name enter `OLLAMA_HOST` and for value enter `0.0.0.0`.
Click OK and start Ollama again from the Start Menu.
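On either operating system you can then verify that Ollama is reachable from the machine running Home Assistant. The hostname below is a placeholder, and 11434 is Ollama's default port.

```
# Replace <ollama-host> with the IP address or hostname of your Ollama server
curl http://<ollama-host>:11434/api/tags
```

If the server is reachable, this returns a JSON list of the models you have downloaded.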
Open WebUI
Open WebUI is a self-hosted AI platform with a built-in inference engine for retrieving knowledge from your own databases, so you can add knowledge to your LLM; that knowledge is then also available in LLM Vision.
You will need to generate an API key in Settings > Account > JWT Token. You also need to have at least one model (with vision support) added.
During setup you will need to provide:
API key
Model (`model_id`)
IP address or hostname of your host
Port (default is 3000)
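As a quick sanity check before entering these values, you can query the Open WebUI API directly with your key; this is a sketch that assumes the default port 3000 and the `/api/models` endpoint exposed by recent Open WebUI versions.

```
# Replace <host> and <api-key> with your Open WebUI host and JWT token
curl -H "Authorization: Bearer <api-key>" http://<host>:3000/api/models
```

A successful response lists the models available to your account, including the `model_id` values to use in LLM Vision.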
LocalAI
To use LocalAI you need to have a LocalAI server running. You can find the installation instructions here. During setup you'll need to provide the IP address of the machine and the port on which LocalAI is running (default is 8000). If you want to use HTTPS to send requests, you need to check the HTTPS box.
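Because LocalAI exposes an OpenAI-compatible API, you can check that the server is running and see which models it serves before adding it to LLM Vision; the host below is a placeholder and the port matches the default mentioned above.

```
# Replace <localai-host> with the IP address or hostname of your LocalAI server
curl http://<localai-host>:8000/v1/models
```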