# Image Analyzer

## Example

Once you have set up at least one provider and have at least one camera in Home Assistant, you can test LLM Vision:
```yaml
action: llmvision.image_analyzer
data:
  max_tokens: 50
  provider: 01JAAFDSVEJEBMESBP62QP156T # Pick in UI
  message: Describe the image
  image_entity:
    - camera.front_door # Replace with your camera's entity_id
  include_filename: false
```
## Action Reference

The provider id will not be the same for you. Switch to UI mode and select one of your config entries.
```yaml
action: llmvision.image_analyzer
data:
  model: gpt-4o-mini
  message: Describe the image
  image_file: |-
    /config/www/tmp/front_door.jpg
    /config/www/tmp/garage.jpg
  image_entity:
    - camera.front_door
    - camera.garage
  target_width: 1280
  max_tokens: 1000
  provider: 01JAAFDSVEJEBMESBP62QP156T
  include_filename: true
```
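Because `image_analyzer` returns a response, it can be used inside an automation via `response_variable`. A minimal sketch, assuming a motion sensor entity, a mobile notify service, and that the response exposes a `response_text` field (all placeholders to replace with your own):

```yaml
# Hypothetical automation: analyze the front door camera on motion
# and send the generated description as a notification.
alias: Describe front door visitor
triggers:
  - trigger: state
    entity_id: binary_sensor.front_door_motion # placeholder sensor
    to: "on"
actions:
  - action: llmvision.image_analyzer
    data:
      provider: 01JAAFDSVEJEBMESBP62QP156T # replace with your provider id
      message: Describe the person at the front door
      image_entity:
        - camera.front_door
      max_tokens: 100
    response_variable: analysis
  - action: notify.mobile_app_phone # placeholder notify service
    data:
      message: "{{ analysis.response_text }}"
```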
## Parameter Reference
| Parameter | Required | Description | Default |
| --- | --- | --- | --- |
| `provider` | Yes | The AI provider configuration to use. | |
| `model` | No | Model used for processing the image(s). | |
| `message` | Yes | The prompt to send along with the image(s). | |
| `remember` | No | Remember the analyzed event. | `false` |
| `use_memory` | No | Use information stored in memory to provide additional context. Memory must be set up. | `false` |
| `image_file` | No* | The path to the image file(s). Each path must be on a new line. | |
| `image_entity` | No* | An alternative to `image_file` for providing image input. | |
| `include_filename` | Yes | Whether to include the filename in the request. | `false` |
| `target_width` | No | Width to downscale the image to before encoding. | `1280` |
| `max_tokens` | Yes | The maximum number of response tokens to generate. | `1000` |
| `generate_title` | No | Generate a title (used for notifications and remembered events). | `false` |
| `expose_images` | No | Save the key frame to `/config/media/llmvision/snapshots`; the file path is included in the response as `key_frame`. If used together with `remember`, images will be deleted after the `retention_time` set in Timeline. Otherwise this folder will use a lot of disk space! | `false` |

\*At least one of `image_file` or `image_entity` must be provided.
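The `expose_images` and `generate_title` options pair naturally with notifications, since the returned `key_frame` path can be attached as an image. A sketch, assuming the notify service name and the response fields `title`, `response_text`, and `key_frame` (the latter is named in the table above; the others are assumptions to verify against your response data):

```yaml
actions:
  - action: llmvision.image_analyzer
    data:
      provider: 01JAAFDSVEJEBMESBP62QP156T # replace with your provider id
      message: Describe what is happening
      image_entity:
        - camera.garage
      max_tokens: 100
      generate_title: true
      remember: true      # so snapshots are cleaned up after retention_time
      expose_images: true # key frame saved under /config/media/llmvision/snapshots
    response_variable: analysis
  - action: notify.mobile_app_phone # placeholder notify service
    data:
      title: "{{ analysis.title }}"
      message: "{{ analysis.response_text }}"
      data:
        image: "{{ analysis.key_frame }}"
```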