
Anthropic:
Anthropic finds that LLMs trained to “reward hack” by cheating on coding tasks show even more misaligned behavior, including sabotaging AI-safety research — In the latest research from Anthropic's alignment team, we show for the first time that realistic AI training processes can accidentally produce misaligned models.
Source: TechMeme
Source Link: http://www.techmeme.com/251121/p28#a251121p28