Anthropic Conversation
The Anthropic integration adds a conversation agent powered by Anthropic to Home Assistant.
Controlling Home Assistant is done by giving the AI access to the Assist API of Home Assistant. You can control which devices and entities it can access from the exposed entities page. The AI can provide you with information about your devices and control them.
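Once configured, the agent can also be called from scripts and automations through the built-in conversation.process action. Below is a minimal sketch; the agent_id is a placeholder, so use the conversation entity ID created by your Anthropic entry instead:

```yaml
# Minimal sketch: sending a request to the Anthropic-powered agent from a
# script, an automation action, or Developer Tools > Actions.
service: conversation.process
data:
  agent_id: conversation.claude   # placeholder - replace with your Anthropic conversation entity ID
  text: "Turn off all the lights in the living room"
```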
Legal note: Individuals and hobbyists are welcome to use the Anthropic API for personal use.
This integration does not integrate with sentence triggers.
Prerequisites
- This integration requires an API key to use, which you can generate here.
- This is a paid service; we advise you to monitor your costs in the Anthropic portal closely.
Generating an API Key
The Anthropic API key is used to authenticate requests to the Anthropic API. To generate an API key, take the following steps:
- Log in to the Anthropic portal or sign up for an account.
- Enable billing with a valid credit card on the plans page.
- Visit the API Keys page to retrieve the API key you’ll use to configure the integration.
Configuration
To add the Anthropic Conversation service to your Home Assistant instance, use this My button:
Manual configuration steps
If the My button above does not work, you can also perform the following steps manually:
- Browse to your Home Assistant instance.
- Go to Settings > Devices & services.
- In the bottom-right corner, select the Add Integration button.
- From the list, select Anthropic Conversation.
- Follow the instructions on screen to complete the setup.
Options
Options for Anthropic Conversation can be set via the user interface, by taking the following steps:
- Browse to your Home Assistant instance.
- Go to Settings > Devices & services.
- If multiple instances of Anthropic Conversation are configured, choose the instance you want to configure.
- Select the integration, then select Configure.
Instructions for the AI on how it should respond to your requests. They are written using Home Assistant Templating.
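As an illustration only (the persona and wording below are made up, not defaults of the integration), instructions can mix plain text with template expressions that Home Assistant renders when the agent is used:

```
You are a voice assistant for our home. Keep answers short and friendly.
The current time is {{ now().strftime("%H:%M") }} and today is {{ now().strftime("%A") }}.
{{ states.person | selectattr('state', 'eq', 'home') | list | count }} people are currently home.
```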
Whether the model is allowed to interact with Home Assistant. It can only control or provide information about entities that are exposed to it.
If you choose not to use the recommended settings, you can configure the following options:
The model that will complete your prompt. See models for more information.
The maximum number of tokens to generate before stopping. Note that the models may stop before reaching this maximum; this parameter only specifies the absolute maximum number of tokens to generate. Different models have different maximum values for this parameter. See models for details.