7 posts: AI Success Models · Market making (AI safety technique) · Verification
2 posts: Conservatism (AI)
Karma | Title | Author | Posted | Comments
13  | An Open Agency Architecture for Safe Transformative AI | davidad | 11h | 11
112 | Conversation with Eliezer: What do you want the system to do? | Akash | 5mo | 38
73  | Various Alignment Strategies (and how likely they are to work) | Logan Zoellner | 7mo | 34
45  | Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios | Evan R. Murphy | 7mo | 0
78  | A positive case for how we might succeed at prosaic AI alignment | evhub | 1y | 47
60  | Solving the whole AGI control problem, version 0.0001 | Steven Byrnes | 1y | 7
20  | If AGI were coming in a year, what should we do? | MichaelStJules | 8mo | 16
31  | Pessimism About Unknown Unknowns Inspires Conservatism | michaelcohen | 2y | 2
17  | RFC: Philosophical Conservatism in AI Alignment Research | Gordon Seidoh Worley | 4y | 13