
Anthropic’s Claude 4 gets feature to cut off abusive user interactions


Anthropic on Friday announced a new safeguard for its Claude 4 family of AI models, Claude Opus 4 and Opus 4.1, designed to terminate conversations in consumer chat interfaces when users engage in abusive or harmful behaviour. In a blog post, the company said the feature is meant for “rare, extreme cases of persistently harmful or abusive user interactions.”

How it works

When Claude ends a conversation, the user can no longer send new messages in that thread. Other chats on the account remain active, allowing the user to start fresh conversations. To prevent…
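
To make the described behaviour concrete, here is a minimal, hypothetical Python sketch of per-thread termination: once a conversation is ended, that thread rejects new messages, while new threads on the same account remain available. All names here (Thread, Account, send, end_conversation, ConversationEndedError) are invented for illustration and are not part of any Anthropic product or API.

```python
# Toy model of the article's description: an ended thread blocks further
# messages, but the account can still open fresh conversations.
# Entirely hypothetical; not Anthropic's implementation.

class ConversationEndedError(Exception):
    """Raised when a message is sent to a terminated thread."""

class Thread:
    def __init__(self, thread_id: str):
        self.thread_id = thread_id
        self.messages: list[str] = []
        self.ended = False  # set once the model terminates the conversation

    def send(self, text: str) -> None:
        if self.ended:
            # Mirrors the stated behaviour: no new messages in this thread.
            raise ConversationEndedError(f"thread {self.thread_id} was ended")
        self.messages.append(text)

    def end_conversation(self) -> None:
        self.ended = True

class Account:
    def __init__(self):
        self._threads: dict[str, Thread] = {}

    def new_thread(self, thread_id: str) -> Thread:
        # Other chats remain active: starting a fresh thread is always allowed.
        thread = Thread(thread_id)
        self._threads[thread_id] = thread
        return thread

if __name__ == "__main__":
    account = Account()
    chat = account.new_thread("t1")
    chat.send("hello")
    chat.end_conversation()
    try:
        chat.send("one more")  # rejected: this thread was ended
    except ConversationEndedError as err:
        print(err)
    account.new_thread("t2").send("fresh start")  # still allowed
```

In this toy model, the termination flag lives on the individual thread rather than the account, which captures the account-level behaviour the article attributes to the feature: one locked conversation, everything else untouched.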
