Reinforcement Learning · Wireheading · Reward Functions — 30 posts

Karma | Title | Author | Age | Comments
10 | Note on algorithms with multiple trained components | Steven Byrnes | 7h | 1
252 | Reward is not the optimization target | TurnTrout | 4mo | 97
40 | A Short Dialogue on the Meaning of Reward Functions | Leon Lang | 1mo | 0
86 | Scaling Laws for Reward Model Overoptimization | leogao | 2mo | 11
69 | Towards deconfusing wireheading and reward maximization | leogao | 3mo | 7
76 | Seriously, what goes wrong with "reward the agent when it makes you smile"? | TurnTrout | 4mo | 41
42 | Four usages of "loss" in AI | TurnTrout | 2mo | 18
32 | Conditioning, Prompts, and Fine-Tuning | Adam Jermyn | 4mo | 9
82 | Jitters No Evidence of Stupidity in RL | 1a3orn | 1y | 18
25 | Reward model hacking as a challenge for reward learning | Erik Jenner | 8mo | 1
59 | Big picture of phasic dopamine | Steven Byrnes | 1y | 18
16 | Value extrapolation vs Wireheading | Stuart_Armstrong | 6mo | 1
59 | My take on Michael Littman on "The HCI of HAI" | Alex Flint | 1y | 4
47 | Draft papers for REALab and Decoupled Approval on tampering | Jonathan Uesato | 2y | 2
AI Capabilities · EfficientZero · Tradeoffs — 11 posts

Karma | Title | Author | Age | Comments
74 | Will we run out of ML data? Evidence from projecting dataset size trends | Pablo Villalobos | 1mo | 12
94 | Evaluations project @ ARC is hiring a researcher and a webdev/engineer | Beth Barnes | 3mo | 7
273 | EfficientZero: How It Works | 1a3orn | 1y | 42
114 | We have achieved Noob Gains in AI | phdead | 7mo | 21
35 | It matters when the first sharp left turn happens | Adam Jermyn | 2mo | 9
134 | EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised | gwern | 1y | 52
77 | OpenAI Solves (Some) Formal Math Olympiad Problems | Michaël Trazzi | 10mo | 26
34 | Remaking EfficientZero (as best I can) | Hoagy | 5mo | 9
87 | The alignment problem in different capability regimes | Buck | 1y | 12
51 | Misc. questions about EfficientZero | Daniel Kokotajlo | 1y | 17
5 | Epistemic Strategies of Safety-Capabilities Tradeoffs | adamShimi | 1y | 0