OpenAI Has Trained Its LLM To Confess To Bad Behavior


An anonymous reader quotes a report from MIT Technology Review: OpenAI is testing another new way to expose the complicated processes at work inside large language models. Researchers at the company can make an LLM produce what they call a confession, in which the model explains how it carried out a task and (most of the time) owns up to any bad behavior. Figuring out why large language models do what they do, and in particular why they sometimes appear to lie, cheat, and…
