Tags: Reinforcement Learning, Inverse Reinforcement Learning, Road To AI Safety Excellence (50 posts); Wireheading, Reward Functions (23 posts)
Karma | Title | Author | Posted | Comments
218 | Reward is not the optimization target | TurnTrout | 4mo | 97
5 | AGIs may value intrinsic rewards more than extrinsic ones | catubc | 1mo | 6
84 | Jitters No Evidence of Stupidity in RL | 1a3orn | 1y | 18
24 | Is CIRL a promising agenda? | Chris_Leong | 6mo | 12
10 | A Survey of Foundational Methods in Inverse Reinforcement Learning | adamk | 3mo | 0
63 | My take on Michael Littman on "The HCI of HAI" | Alex Flint | 1y | 4
92 | Book Review: Human Compatible | Scott Alexander | 2y | 6
11 | RLHF | Ansh Radhakrishnan | 7mo | 5
56 | Thoughts on "Human-Compatible" | TurnTrout | 3y | 35
49 | Book review: Human Compatible | PeterMcCluskey | 2y | 2
63 | RAISE is launching their MVP | | 3y | 1
46 | Learning biases and rewards simultaneously | Rohin Shah | 3y | 3
38 | Model Mis-specification and Inverse Reinforcement Learning | Owain_Evans | 4y | 3
26 | Reinforcement learning with imperceptible rewards | Vanessa Kosoy | 3y | 1
13 | Note on algorithms with multiple trained components | Steven Byrnes | 6h | 1
35 | A Short Dialogue on the Meaning of Reward Functions | Leon Lang | 1mo | 0
16 | generalized wireheading | carado | 1mo | 7
77 | Seriously, what goes wrong with "reward the agent when it makes you smile"? | TurnTrout | 4mo | 41
11 | An investigation into when agents may be incentivized to manipulate our beliefs. | Felix Hofstätter | 3mo | 0
27 | $100/$50 rewards for good references | Stuart_Armstrong | 1y | 5
51 | Draft papers for REALab and Decoupled Approval on tampering | Jonathan Uesato | 2y | 2
158 | Are wireheads happy? | Scott Alexander | 12y | 107
29 | Defining AI wireheading | Stuart_Armstrong | 3y | 9
36 | Thoughts on reward engineering | paulfchristiano | 3y | 30
34 | Wireheading is in the eye of the beholder | Stuart_Armstrong | 3y | 10
22 | Wireheading and discontinuity | Michele Campolo | 2y | 4
33 | Wireheading as a potential problem with the new impact measure | Stuart_Armstrong | 4y | 20
58 | The Stamp Collector | So8res | 7y | 14