A review of “Investigating the consequences of accidentally grading CoT during RL”

LessWrong

Last week, OpenAI staff shared an early draft of Investigating the consequences of accidentally grading CoT during RL with Redwood Research staff.

To start with, I appreciate them publishing this post. I think it is valuable for AI companies to be transparent about problems like these when they arise. I particularly appreciate them sharing the post with us early, discussing the issues in detail, and modifying it to address our most important criticisms.

I think it will be increasingly important fo...

