Tags: Iterated Amplification (23 posts) · Humans Consulting HCH (14 posts) · Delegation
Karma | Title | Author | Posted | Comments
47 | Notes on OpenAI’s alignment plan | Alex Flint | 12d | 5
125 | Debate update: Obfuscated arguments problem | Beth Barnes | 1y | 21
119 | My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda | Chi Nguyen | 2y | 21
74 | Model splintering: moving from one imperfect model to another | Stuart_Armstrong | 2y | 10
125 | Paul's research agenda FAQ | zhukeepa | 4y | 73
61 | Relaxed adversarial training for inner alignment | evhub | 3y | 28
49 | Machine Learning Projects on IDA | Owain_Evans | 3y | 3
33 | Synthesizing amplification and debate | evhub | 2y | 10
47 | Directions and desiderata for AI alignment | paulfchristiano | 3y | 1
45 | Iterated Distillation and Amplification | Ajeya Cotra | 4y | 13
42 | Preface to the sequence on iterated amplification | paulfchristiano | 4y | 8
30 | Thoughts on reward engineering | paulfchristiano | 3y | 30
19 | How does iterated amplification exceed human abilities? | riceissa | 2y | 9
29 | Supervising strong learners by amplifying weak experts | paulfchristiano | 3y | 1
57 | Garrabrant and Shah on human modeling in AGI | Rob Bensinger | 1y | 10
42 | HCH Speculation Post #2A | Charlie Steiner | 1y | 7
28 | Universality Unwrapped | adamShimi | 2y | 2
10 | Universality and the “Filter” | maggiehayes | 1y | 3
41 | HCH is not just Mechanical Turk | William_S | 3y | 6
30 | What are the differences between all the iterative/recursive approaches to AI alignment? | riceissa | 3y | 14
35 | What's wrong with these analogies for understanding Informed Oversight and IDA? | Wei_Dai | 3y | 3
15 | Mapping the Conceptual Territory in AI Existential Safety and Alignment | jbkjr | 1y | 0
34 | Can HCH epistemically dominate Ramanujan? | zhukeepa | 3y | 4
32 | Humans Consulting HCH | paulfchristiano | 4y | 10
27 | Towards formalizing universality | paulfchristiano | 3y | 19
20 | Meta-execution | paulfchristiano | 4y | 1
7 | Predicting HCH using expert advice | jessicata | 6y | 0
1 | HCH as a measure of manipulation | orthonormal | 5y | 0