Can AI sandbag safety checks to sabotage users? Yes, but not very well — for now

AI companies claim to have robust safety checks in place to ensure that models don't say or do weird, illegal, or unsafe stuff. But what if the models were capable of evading those checks and, for some reason, trying to sabotage or mislead users? According to Anthropic researchers, it turns out they can. Just not very well … for now, anyway.
