
Scaling laws for reward model overoptimization


In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart's law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed "gold-standard" reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-n sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment.
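For reference, the functional forms reported in the paper express the gold reward model score $R$ in terms of $d := \sqrt{D_{\mathrm{KL}}(\pi \,\|\, \pi_{\mathrm{init}})}$, the square root of the KL divergence between the optimized policy and the initial policy:

$$R_{\mathrm{bon}}(d) = d\,(\alpha_{\mathrm{bon}} - \beta_{\mathrm{bon}}\, d), \qquad R_{\mathrm{RL}}(d) = d\,(\alpha_{\mathrm{RL}} - \beta_{\mathrm{RL}} \log d),$$

where the $\alpha$ and $\beta$ coefficients are fit empirically and scale smoothly with the number of reward model parameters.

Below is a minimal sketch of the best-of-n half of the synthetic setup, assuming hypothetical callables `policy_sample`, `proxy_rm`, and `gold_rm` (these names are illustrative, not the paper's code). It selects the candidate the proxy reward model scores highest and evaluates the result with the gold reward model, using the analytic expression $\mathrm{KL}_{\mathrm{bon}}(n) = \log n - (n-1)/n$ for the KL divergence of best-of-n sampling.

```python
# Sketch only: `policy_sample(prompt)`, `proxy_rm(prompt, completion)`, and
# `gold_rm(prompt, completion)` are assumed interfaces, not released code.
import numpy as np


def best_of_n(prompt, n, policy_sample, proxy_rm):
    """Draw n samples from the policy and keep the one the proxy RM scores highest."""
    candidates = [policy_sample(prompt) for _ in range(n)]
    scores = [proxy_rm(prompt, c) for c in candidates]
    return candidates[int(np.argmax(scores))]


def gold_score_vs_kl(prompts, ns, policy_sample, proxy_rm, gold_rm):
    """For each n, return (sqrt(KL), mean gold RM score) of best-of-n outputs."""
    results = []
    for n in ns:
        # Analytic KL divergence of best-of-n sampling from the initial policy.
        kl = np.log(n) - (n - 1) / n
        gold = np.mean([
            gold_rm(p, best_of_n(p, n, policy_sample, proxy_rm))
            for p in prompts
        ])
        results.append((np.sqrt(kl), gold))
    return results
```

Plotting the gold score against $d = \sqrt{\mathrm{KL}}$ for increasing $n$ is what allows fitting the $R_{\mathrm{bon}}$ form above; the RL case is analogous but measures the KL of the trained policy directly.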

