# Self-hosted Providers

## Ollama

To use Ollama you must set up an Ollama server first. You can download it from [here](https://ollama.com/). Once installed, you need to download a model. Only models with vision capabilities can be used; a full list of them is available [here](https://ollama.com/search?c=vision). The recommended model is `gemma3`:

```
ollama run gemma3
```
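The `run` command downloads the model on first use. Afterwards you can confirm it is installed (if you only want to download the model without starting an interactive session, `ollama pull gemma3` does the same without the chat prompt):

```shell
ollama list
```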

If your Home Assistant is **not** running on the same server as Ollama, you need to set the `OLLAMA_HOST` environment variable.

<details>

<summary>Linux</summary>

1. Edit the systemd service by calling `systemctl edit ollama.service`. This will open an editor.
2. For each environment variable, add an `Environment` line under the `[Service]` section:

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

3. Save and close the editor.
4. Reload systemd and restart Ollama:

```
systemctl daemon-reload
systemctl restart ollama
```

</details>

<details>

<summary>Windows</summary>

1. Quit Ollama from the system tray
2. Open File Explorer
3. Right click on This PC and select Properties
4. Click on Advanced system settings
5. Select Environment Variables
6. Under User variables click New
7. For variable name enter `OLLAMA_HOST` and for value enter `0.0.0.0`
8. Click OK and start Ollama again from the Start Menu

</details>

<details>

<summary>macOS</summary>

1. Open Terminal
2. Run the following command

```
launchctl setenv OLLAMA_HOST "0.0.0.0"
```

3. Restart Ollama

</details>
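Whichever platform you use, you can verify that Ollama is reachable from another machine (such as your Home Assistant host) by querying its API. This is a sketch: replace `ollama-host` with your server's IP address or hostname; `11434` is Ollama's default port.

```shell
# Lists the models installed on the Ollama server as JSON.
# If OLLAMA_HOST was not set correctly, this request will fail.
curl http://ollama-host:11434/api/tags
```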

<details>

<summary><strong>Models</strong></summary>

See [Ollama model library](https://ollama.com/search?c=vision) for all models.

**Recommended Models**

* `gemma3`
* `qwen2.5vl`

</details>

## Open WebUI

Open WebUI is a self-hosted AI platform with a built-in inference engine for retrieval-augmented generation (RAG), so you can add knowledge to your LLM. This knowledge is also available when the model is used through LLM Vision.

You will need to generate an API key in Settings > Account > JWT Token. You also need to have at least one model with vision support added.

During setup you will need to provide:

* API key
* Model (`model_id`)
* IP address or hostname of your host
* Port (default is `3000`)
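You can check the API key and connection details before setup by querying Open WebUI directly. This is a sketch under assumptions: `openwebui-host` stands in for your host, `3000` for the port, and `YOUR_JWT_TOKEN` for the key generated above.

```shell
# Lists the models available to your account.
# A 401 response means the JWT token is wrong or expired.
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" http://openwebui-host:3000/api/models
```

The `id` of a vision-capable entry in the response is what you enter as `model_id` during setup.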

## LocalAI

To use LocalAI you need to have a LocalAI server running. You can find the installation instructions [here](https://localai.io/basics/getting_started/). During setup you'll need to provide the IP address of your machine and the port on which LocalAI is running (default is `8000`). If you want to send requests over `HTTPS`, check the `HTTPS` box.
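LocalAI exposes an OpenAI-compatible API, so you can verify the server is up and see which models it serves before setup. A sketch, assuming `localai-host` is your machine and `8000` the port you configured:

```shell
# Returns the list of models the LocalAI server can serve.
curl http://localai-host:8000/v1/models
```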
