Providers


LLM Vision combines multiple AI providers into one easy-to-use integration. The following AI providers are supported:

If you are unsure which model to use, see the comparison in Choosing the right model.

Cloud based Providers

Easy to set up and blazingly fast.

| Provider    | Pricing              | Supported Models                               |
| ----------- | -------------------- | ---------------------------------------------- |
| OpenAI      | Pay as you go        | gpt-4o, gpt-4o-mini                            |
| Anthropic   | Pay as you go        | claude3.5-sonnet, claude3.5-haiku              |
| Google      | Free / Pay as you go | gemini-2.0, gemini-1.5                         |
| Groq        | Free                 | llama3.2-vision                                |
| AWS Bedrock | Pay as you go        | Amazon Nova, claude3.5-sonnet, claude3.5-haiku |

Self-hosted Providers

Achieve maximum privacy by hosting LLMs on a local machine.

| Provider   | Supported Models                             |
| ---------- | -------------------------------------------- |
| Ollama     | gemma3, llama3.2-vision, minicpm-v, llava1.6 |
| Open WebUI | gemma3, llama3.2-vision, minicpm-v, llava1.6 |
| LocalAI    | gemma3, llama3.2-vision, minicpm-v, llava1.6 |

Integration Providers

These providers cannot handle AI requests but provide additional context or settings.

Timeline

Stores events to build a timeline. Exposes the calendar entity needed for the Timeline Card.

Settings: timeline

Memory

Stores reference images along with descriptions to provide additional context to the model. Syncs across all providers.

Settings: memory, system prompt, title prompt
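As an illustration, analyzer actions can write their results into the Timeline once it is set up. A minimal sketch, assuming the analyzer actions expose a remember option; the entity name is hypothetical, so check the action UI for the exact fields:

```yaml
# Minimal sketch: store an analysis result as a Timeline event.
# Assumes the Timeline provider is set up; 'remember' and the entity
# name are illustrative — verify the exact fields in the action UI.
action: llmvision.image_analyzer
data:
  remember: true        # store the result in the Timeline
  message: "Describe the person at the front door"
  image_entity:
    - camera.front_door # hypothetical camera entity
```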

Setup

Each provider is slightly different, but most require an API key. Self-hosted providers need a base URL and port.
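For example, a self-hosted Ollama instance usually listens on its default port 11434, so the connection details would look something like this (the address is hypothetical, and the exact field labels in the dialog may differ):

```yaml
# Illustrative connection details for a self-hosted provider.
# 11434 is Ollama's default port; the IP address is made up.
base_url: http://192.168.1.50
port: 11434
```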

Providers can also be reconfigured if you need to change anything later!

To set up your first provider:

  1. Navigate to Devices & Services in the Home Assistant settings

  2. Click Add Integration

  3. Search for 'LLM Vision'

  4. Select the provider you want to set up from the dropdown

  5. Enter all required details

On success, you will see LLM Vision in your integrations. From here you can set up new providers or delete existing ones.

Using Providers in Actions

You can have multiple configurations per provider. This is especially useful for local providers, for example when you have two machines hosting different models.

[Image: Multiple configurations for Ollama with different hosts]

When running an action, you can select one of your provider configurations:

[Image: Selecting a provider]
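In YAML, the chosen configuration is passed as the action's provider field. A minimal sketch with hypothetical IDs and entity names (in practice you pick the provider from the dropdown and the ID is filled in for you):

```yaml
# Minimal sketch: run Image Analyzer against one specific provider
# configuration. All IDs and entity names here are hypothetical.
action: llmvision.image_analyzer
data:
  provider: YOUR_PROVIDER_ID # filled in when you pick from the dropdown
  model: gpt-4o-mini         # optional: override the provider's default model
  message: "What is happening in this image?"
  image_entity:
    - camera.driveway        # hypothetical camera entity
```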
