Self-hosted Providers
This page will help you set up self-hosted providers with LLM Vision.
Ollama
To use Ollama you must set up an Ollama server first. You can download it from here. Once installed, you need to download a model to use. A full list of all available models that support vision can be found here. Keep in mind that only models with vision capabilities can be used. The recommended model is gemma3:

ollama run gemma3
If your Home Assistant is not running on the same server as Ollama, you need to set the OLLAMA_HOST environment variable.
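How you set OLLAMA_HOST permanently depends on how Ollama is installed (for example, a systemd service override on Linux). As a minimal sketch, assuming you start Ollama manually from a shell, you can bind it to all interfaces like this:

# bind the Ollama server to all interfaces so Home Assistant can reach it over the network
OLLAMA_HOST=0.0.0.0 ollama serve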
Open WebUI
Open WebUI is a self-hosted AI platform with a built-in inference engine for knowledge databases, so you can add knowledge to your LLM. This knowledge is also available in LLM Vision.
You will need to generate an API key in Settings > Account > JWT Token. You also need to have at least one model with vision support added.
During setup you will need to provide the following (a quick way to verify these values is shown after the list):
- API key
- Model (model_id)
- IP address or hostname of your host
- Port (default is 3000)
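Before entering these values in LLM Vision, you can verify them with a test request. This is a minimal sketch assuming Open WebUI's OpenAI-compatible chat endpoint; the host, port, token, and model shown are placeholders you must replace with your own values:

# send a simple text prompt to Open WebUI to confirm the API key and model_id work
curl http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "YOUR_MODEL_ID", "messages": [{"role": "user", "content": "Hello"}]}'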
LocalAI
To use LocalAI you need to have a LocalAI server running. You can find the installation instructions here. During setup you'll need to provide the IP address of your machine and the port on which LocalAI is running (default is 8000). If you want to use HTTPS to send requests, check the HTTPS box.
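Before entering the address in LLM Vision, you can check that the server is reachable. This is a minimal sketch assuming LocalAI's OpenAI-compatible /v1/models endpoint, the default port from above, and plain HTTP; replace localhost with your machine's IP address if needed:

# should return a JSON list of the models LocalAI has available
curl http://localhost:8000/v1/models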