Iterated Amplification (26 posts)
Humans Consulting HCH (17 posts)
Karma | Title | Author | Posted | Comments
50 | Notes on OpenAI’s alignment plan | Alex Flint | 12d | 5
61 | Relaxed adversarial training for inner alignment | evhub | 3y | 28
132 | Debate update: Obfuscated arguments problem | Beth Barnes | 1y | 21
111 | Paul's research agenda FAQ | zhukeepa | 4y | 73
28 | The reward engineering problem | paulfchristiano | 3y | 3
27 | Reliability amplification | paulfchristiano | 3y | 3
28 | Approval-directed bootstrapping | paulfchristiano | 4y | 0
30 | Approval-directed agents | paulfchristiano | 4y | 11
26 | Explanation of Paul's AI-Alignment agenda by Ajeya Cotra | habryka | 4y | 0
39 | Preface to the sequence on iterated amplification | paulfchristiano | 4y | 8
44 | Iterated Distillation and Amplification | Ajeya Cotra | 4y | 13
19 | Amplification Discussion Notes | William_S | 4y | 3
15 | Benign model-free RL | paulfchristiano | 4y | 1
117 | My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda | Chi Nguyen | 2y | 21
27 | Meta-execution | paulfchristiano | 4y | 1
45 | HCH is not just Mechanical Turk | William_S | 3y | 6
31 | Can HCH epistemically dominate Ramanujan? | zhukeepa | 3y | 4
6 | Predicting HCH using expert advice | jessicata | 6y | 0
1 | HCH as a measure of manipulation | orthonormal | 5y | 0
20 | Epistemology of HCH | adamShimi | 1y | 2
15 | Mapping the Conceptual Territory in AI Existential Safety and Alignment | jbkjr | 1y | 0
43 | What's wrong with these analogies for understanding Informed Oversight and IDA? | Wei_Dai | 3y | 3
34 | Towards formalizing universality | paulfchristiano | 3y | 19
48 | HCH Speculation Post #2A | Charlie Steiner | 1y | 7
32 | Humans Consulting HCH | paulfchristiano | 4y | 10
60 | Relating HCH and Logical Induction | abramdemski | 2y | 4
3 | Universality and the “Filter” | maggiehayes | 1y | 3
68 | Garrabrant and Shah on human modeling in AGI | Rob Bensinger | 1y | 10