# Stream Analyzer

### Action Reference

To analyze a live feed from one or more cameras, call `llmvision.stream_analyzer`.

{% code title="Action Reference" %}

```yaml
service: llmvision.stream_analyzer
data:
  provider: 01J99F4T99PA1XGQ4CTQS3CP8H  # Select in UI dropdown
  model: gpt-5-mini
  message: What is happening around the house?
  max_tokens: 1000
  image_entity:
    - camera.front_door
    - camera.garage
  duration: 10 # Record for 10 seconds
  max_frames: 5 # Analyze the 5 most relevant frames
  target_width: 1280
  include_filename: true # Include camera name in request
```

{% endcode %}

{% hint style="warning" %}
The `provider` id will not be the same for you. Switch to UI mode and select one of your configurations. If you don't see any, you need to set up at least one provider!
{% endhint %}

{% hint style="info" %}
For all available models see: [Choosing the right model](/getting-started/choosing-the-right-model.md)
{% endhint %}
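In an automation, the action's output can be captured with `response_variable` and reused, for example in a notification. The sketch below is illustrative only: the motion sensor entity and the mobile-app notify service are placeholders, and the response field name `response_text` is an assumption, so check the integration's actual response for the exact field names.

```yaml
automation:
  - alias: "Describe activity at the front door"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door_motion  # placeholder entity
        to: "on"
    action:
      - service: llmvision.stream_analyzer
        data:
          provider: 01J99F4T99PA1XGQ4CTQS3CP8H  # replace with your provider ID
          message: What is happening around the house?
          image_entity:
            - camera.front_door
          duration: 10
          max_frames: 5
          max_tokens: 100
        response_variable: analysis  # capture the action's response
      - service: notify.mobile_app_phone  # placeholder notify service
        data:
          title: Motion detected
          message: "{{ analysis.response_text }}"  # field name assumed
```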

### Parameter Reference

<table><thead><tr><th width="161.265625">Parameter</th><th width="92.0625">Required</th><th width="345.5546875">Description</th><th width="98.06640625">Default</th></tr></thead><tbody><tr><td><code>provider</code></td><td>Yes</td><td>The AI provider configuration.</td><td></td></tr><tr><td><code>model</code></td><td>No</td><td>Model used for processing the image(s).</td><td></td></tr><tr><td><code>message</code></td><td>Yes</td><td>The prompt to send along with the image(s).</td><td></td></tr><tr><td><code>store_in_timeline</code></td><td>No</td><td>Add the event to Timeline.</td><td><code>false</code></td></tr><tr><td><code>use_memory</code></td><td>No</td><td>Use information stored in memory to provide additional context. Memory must be set up.</td><td><code>false</code></td></tr><tr><td><code>image_entity</code></td><td>Yes</td><td>Camera entity (or entities) to stream.</td><td></td></tr><tr><td><code>duration</code></td><td>Yes</td><td>How many seconds to capture and analyze the stream.</td><td>5</td></tr><tr><td><code>max_frames</code></td><td>No</td><td>How many frames to analyze. Picks the most relevant frames (those with the most motion).</td><td>3</td></tr><tr><td><code>include_filename</code></td><td>Yes</td><td>Include the camera name in the request.</td><td><code>false</code></td></tr><tr><td><code>target_width</code></td><td>No</td><td>Width to downscale the image to before encoding.</td><td>1280</td></tr><tr><td><code>max_tokens</code></td><td>Yes</td><td>The maximum number of response tokens to generate.</td><td>1000</td></tr><tr><td><code>generate_title</code></td><td>No</td><td>Generate a title and return it in the response (used for notifications and remembered events).</td><td><code>false</code></td></tr><tr><td><code>expose_images</code></td><td>No</td><td>Save the key frame to <code>/config/media/llmvision/snapshots</code>. The file path is included in the response as <code>key_frame</code>. If used together with <code>remember</code>, images are deleted after the <code>retention_time</code> set in Timeline; otherwise this folder can use a lot of disk space!</td><td><code>false</code></td></tr><tr><td><code>response_format</code></td><td>No</td><td>Format of the response: <code>text</code> for natural language or <code>json</code> for structured data.</td><td><code>text</code></td></tr><tr><td><code>structure</code></td><td>No</td><td>JSON schema defining the expected response structure (only used when <code>response_format</code> is <code>json</code>). See &#x3C;here> for more information.</td><td>see here</td></tr><tr><td><code>title_field</code></td><td>No</td><td>Name of the field in your JSON schema that contains the event title (used for Timeline). Leave empty to use the fallback title "Motion detected".</td><td><code>title</code></td></tr><tr><td><code>description_field</code></td><td>No</td><td>Name of the field in your JSON schema that contains the event description (used for Timeline).</td><td><code>description</code></td></tr></tbody></table>
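To get structured output, `response_format` and `structure` can be combined. The sketch below is an assumption about how the schema is passed: the exact shape `structure` expects may differ, so verify the field definitions against the linked structured-output documentation.

```yaml
service: llmvision.stream_analyzer
data:
  provider: 01J99F4T99PA1XGQ4CTQS3CP8H  # replace with your provider ID
  message: Summarize the scene.
  image_entity:
    - camera.front_door
  response_format: json
  structure: |  # illustrative schema; verify the expected format
    {
      "title": "short event title",
      "description": "one-sentence description of the scene"
    }
  title_field: title
  description_field: description
```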


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://llmvision.gitbook.io/getting-started/usage/stream-analyzer.md?ask=<question>
```
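For example, a concrete (hypothetical) question about this page could be asked like this, with the question URL-encoded:

```
GET https://llmvision.gitbook.io/getting-started/usage/stream-analyzer.md?ask=How%20are%20the%20most%20relevant%20frames%20selected
```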

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
