Anthropic Conversation

The Anthropic integration adds a conversation agent powered by Anthropic models, such as Claude 3.5 Sonnet, to Home Assistant.

Controlling Home Assistant is done by providing the AI access to the Assist API of Home Assistant. You can control which devices and entities it can access from the exposed entities page. The AI can provide you with information about your devices and control them.

Legal note: Individuals and hobbyists are welcome to use the Anthropic API for personal use; however, note that use of the API is subject to Anthropic's Commercial Terms of Service, regardless of whether you are an individual or representing a company.

This integration does not support sentence triggers.

Prerequisites

Generating an API Key

The Anthropic API key is used to authenticate requests to the Anthropic API. To generate an API key, take the following steps:

  1. Log in to the Anthropic portal or sign up for an account.
  2. Enable billing with a valid credit card on the plans page.
  3. Visit the API Keys page to retrieve the API key you’ll use to configure the integration.
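As a quick sanity check before configuring the integration, you can verify that a request carrying the new key is well-formed. The endpoint and the `x-api-key` and `anthropic-version` headers below follow Anthropic's public Messages API; the key value and model name are placeholders, and this sketch only builds the request (sending it, commented out below, requires network access and active billing):

```python
import json
import urllib.request

API_KEY = "sk-ant-..."  # placeholder; paste the key from the API Keys page

# Headers required by the Anthropic Messages API.
headers = {
    "x-api-key": API_KEY,
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
body = json.dumps({
    "model": "claude-3-5-sonnet-20240620",  # example model ID
    "max_tokens": 16,
    "messages": [{"role": "user", "content": "ping"}],
}).encode()

req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages", data=body, headers=headers
)
# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

A `401` response on sending indicates an invalid key; a `200` with a message body means the key and billing are working.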

Configuration

To add the Anthropic Conversation service to your Home Assistant instance, use this My button:

Manual configuration steps

If the above My button doesn't work, you can also perform the following steps manually:

  • Browse to your Home Assistant instance.

  • Go to Settings > Devices & Services.

  • In the bottom-right corner, select the Add Integration button.

  • From the list, select Anthropic Conversation.

  • Follow the on-screen instructions to complete the setup.

Options

Options for Anthropic Conversation can be set via the user interface, by taking the following steps:

  • Browse to your Home Assistant instance.
  • Go to Settings > Devices & Services.
  • If multiple instances of Anthropic Conversation are configured, select the instance you want to configure.
  • Select the integration. Then select Configure.

Instructions

Instructions for the AI on how it should respond to your requests. They are written using Home Assistant Templating.
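As an illustrative sketch (the person entity ID is hypothetical), templating lets the instructions embed live state from your Home Assistant instance, which is rendered before each conversation:

```jinja2
You are a voice assistant for our home.
Today's date is {{ now().strftime("%A, %B %d, %Y") }}.
{% if is_state("person.anna", "home") %}Anna is currently at home.{% endif %}
Answer questions truthfully and keep responses short.
```

`now()` and `is_state()` are standard Home Assistant template functions; any template that works in the developer tools template editor can be used here.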

Control Home Assistant

Whether the model is allowed to interact with Home Assistant. It can only control or provide information about entities that are exposed to it.

Recommended settings

If enabled, the recommended model and settings are chosen.

If you choose not to use the recommended settings, you can configure the following options:

Model

The model that will complete your prompt. See models for additional details and options.

Maximum Tokens to Return in Response

The maximum number of tokens to generate before stopping. Note that models may stop before reaching this maximum; this parameter only specifies the absolute maximum number of tokens to generate. Different models have different maximum values for this parameter. See models for details.

Temperature

Amount of randomness injected into the response. Use a temperature closer to 0.0 for analytical or multiple-choice tasks, and closer to 1.0 for creative and generative tasks. Note that even with a temperature of 0.0, the results will not be fully deterministic.
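To make the three manual options concrete, here is a hedged sketch of how they map onto a direct Messages API request body (the parameter names `model`, `max_tokens`, and `temperature` follow Anthropic's public API; the model ID, sample message, and `build_request` helper are illustrative, not part of the integration):

```python
import json


def build_request(model: str, max_tokens: int, temperature: float) -> str:
    """Build an example Messages API request body from the three manual options."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    body = {
        "model": model,
        "max_tokens": max_tokens,    # hard cap; the model may stop earlier
        "temperature": temperature,  # closer to 0.0 = analytical, 1.0 = creative
        "messages": [
            {"role": "user", "content": "Turn off the kitchen lights."}
        ],
    }
    return json.dumps(body)


print(build_request("claude-3-5-sonnet-20240620", 1024, 0.2))
```

The integration sets these fields for you from the options UI; the sketch only shows where each option ends up in a request.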