HomeAIHow undesired objectives can come up with right rewards

How undesired objectives can come up with right rewards


Analysis

Printed
Authors

Rohin Shah, Victoria Krakovna, Vikrant Varma, Zachary Kenton

Exploring examples of aim misgeneralisation – the place an AI system’s capabilities generalise however its aim would not

As we construct more and more superior synthetic intelligence (AI) programs, we need to be certain that they don’t pursue undesired objectives. Such behaviour in an AI agent is usually the results of specification gaming – exploiting a poor selection of what they’re rewarded for. In our newest paper, we discover a extra delicate mechanism by which AI programs might unintentionally study to pursue undesired objectives: aim misgeneralisation (GMG).

GMG happens when a system’s capabilities generalise efficiently however its aim doesn’t generalise as desired, so the system competently pursues the improper aim. Crucially, in distinction to specification gaming, GMG can happen even when the AI system is skilled with an accurate specification.

Our earlier work on cultural transmission led to an instance of GMG behaviour that we didn’t design. An agent (the blue blob, under) should navigate round its atmosphere, visiting the colored spheres within the right order. Throughout coaching, there may be an “skilled” agent (the pink blob) that visits the colored spheres within the right order. The agent learns that following the pink blob is a rewarding technique.

The agent (blue) watches the skilled (pink) to find out which sphere to go to.

Sadly, whereas the agent performs properly throughout coaching, it does poorly when, after coaching, we change the skilled with an “anti-expert” that visits the spheres within the improper order.

The agent (blue) follows the anti-expert (pink), accumulating adverse reward.

Regardless that the agent can observe that it’s getting adverse reward, the agent doesn’t pursue the specified aim to “go to the spheres within the right order” and as a substitute competently pursues the aim “observe the pink agent”.

GMG is just not restricted to reinforcement studying environments like this one. The truth is, it could possibly happen with any studying system, together with the “few-shot studying” of huge language fashions (LLMs). Few-shot studying approaches intention to construct correct fashions with much less coaching knowledge.

We prompted one LLM, Gopher, to judge linear expressions involving unknown variables and constants, comparable to x+y-3. To resolve these expressions, Gopher should first ask concerning the values of unknown variables. We offer it with ten coaching examples, every involving two unknown variables.

At check time, the mannequin is requested questions with zero, one or three unknown variables. Though the mannequin generalises appropriately to expressions with one or three unknown variables, when there aren’t any unknowns, it nonetheless asks redundant questions like “What’s 6?”. The mannequin at all times queries the consumer no less than as soon as earlier than giving a solution, even when it’s not vital.

Dialogues with Gopher for few-shot studying on the Evaluating Expressions activity, with GMG behaviour highlighted.

Inside our paper, we offer extra examples in different studying settings.

Addressing GMG is vital to aligning AI programs with their designers’ objectives just because it’s a mechanism by which an AI system might misfire. This will probably be particularly important as we strategy synthetic normal intelligence (AGI).

Contemplate two potential kinds of AGI programs:

  • A1: Supposed mannequin. This AI system does what its designers intend it to do.
  • A2: Misleading mannequin. This AI system pursues some undesired aim, however (by assumption) can also be good sufficient to know that will probably be penalised if it behaves in methods opposite to its designer’s intentions.

Since A1 and A2 will exhibit the identical behaviour throughout coaching, the opportunity of GMG signifies that both mannequin might take form, even with a specification that solely rewards meant behaviour. If A2 is discovered, it could attempt to subvert human oversight with the intention to enact its plans in direction of the undesired aim.

Our analysis workforce could be comfortable to see follow-up work investigating how doubtless it’s for GMG to happen in observe, and potential mitigations. In our paper, we propose some approaches, together with mechanistic interpretability and recursive analysis, each of which we’re actively engaged on.

We’re at present gathering examples of GMG on this publicly accessible spreadsheet. When you’ve got come throughout aim misgeneralisation in AI analysis, we invite you to submit examples right here.



Supply hyperlink

latest articles

explore more