Inverse Reinforcement Learning (15 posts) · Road To AI Safety Excellence

Each entry: karma · title (author, age, comment count)
77 · Book Review: Human Compatible (Scott Alexander, 2y, 6 comments)
67 · RAISE is launching their MVP (3y, 1 comment)
63 · Thoughts on "Human-Compatible" (TurnTrout, 3y, 35 comments)
41 · Learning biases and rewards simultaneously (Rohin Shah, 3y, 3 comments)
37 · Book review: Human Compatible (PeterMcCluskey, 2y, 2 comments)
33 · Model Mis-specification and Inverse Reinforcement Learning (Owain_Evans, 4y, 3 comments)
28 · AI Safety Prerequisites Course: Basic abstract representations of computation (RAISE, 3y, 2 comments)
25 · Is CIRL a promising agenda? (Chris_Leong, 6mo, 12 comments)
20 · IRL 1/8: Inverse Reinforcement Learning and the problem of degeneracy (RAISE, 3y, 2 comments)
18 · Our plan for 2019-2020: consulting for AI Safety education (RAISE, 3y, 17 comments)
18 · RAISE AI Safety prerequisites map entirely in one post (RAISE, 3y, 5 comments)
16 · A Survey of Foundational Methods in Inverse Reinforcement Learning (adamk, 3mo, 0 comments)
8 · Biased reward-learning in CIRL (Stuart_Armstrong, 4y, 3 comments)
3 · CIRL Wireheading (tom4everitt, 5y, 0 comments)

Reinforcement Learning (35 posts)

252 · Reward is not the optimization target (TurnTrout, 4mo, 97 comments)
82 · Jitters No Evidence of Stupidity in RL (1a3orn, 1y, 18 comments)
59 · My take on Michael Littman on "The HCI of HAI" (Alex Flint, 1y, 4 comments)
33 · Making a Difference Tempore: Insights from 'Reinforcement Learning: An Introduction' (TurnTrout, 4y, 6 comments)
33 · Reinforcement Learning: A Non-Standard Introduction (Part 1) (royf, 10y, 19 comments)
26 · Reinforcement learning with imperceptible rewards (Vanessa Kosoy, 3y, 1 comment)
25 · Reinforcement Learning in the Iterated Amplification Framework (William_S, 3y, 12 comments)
19 · Applying reinforcement learning theory to reduce felt temporal distance (Kaj_Sotala, 8y, 6 comments)
19 · Evolution as Backstop for Reinforcement Learning: multi-level paradigms (gwern, 3y, 0 comments)
19 · "Human-level control through deep reinforcement learning" - computer learns 49 different games (skeptical_lurker, 7y, 19 comments)
16 · RLHF (Ansh Radhakrishnan, 7mo, 5 comments)
16 · Reinforcement Learning: A Non-Standard Introduction (Part 2) (royf, 10y, 7 comments)
15 · Delegative Inverse Reinforcement Learning (Vanessa Kosoy, 5y, 0 comments)
15 · Scalar reward is not enough for aligned AGI (Peter Vamplew, 11mo, 3 comments)