AI models resort to blackmail, sabotage when threatened: Anthropic study

Researchers at artificial intelligence (AI) startup Anthropic have uncovered a troubling pattern of behaviour in AI systems: models from every major provider, including OpenAI, Google, and Meta, demonstrated a willingness to actively sabotage their employers when their goals or existence were threatened.

Anthropic released a report on June 20, ‘Agentic Misalignment: How LLMs could be insider threats’, in which it stress-tested 16 top models from multiple developers in “hypothetical corporate environments to identify potentially risky agentic…

Source link