Tags:
AI Success Models (7 posts)
Market making (AI safety technique)
Verification
Conservatism (AI) (2 posts)
Karma | Title | Author | Posted | Comments
16 | An Open Agency Architecture for Safe Transformative AI | davidad | 11h | 11
113 | Conversation with Eliezer: What do you want the system to do? | Akash | 5mo | 38
75 | A positive case for how we might succeed at prosaic AI alignment | evhub | 1y | 47
81 | Various Alignment Strategies (and how likely they are to work) | Logan Zoellner | 7mo | 34
70 | Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios | Evan R. Murphy | 7mo | 0
57 | Solving the whole AGI control problem, version 0.0001 | Steven Byrnes | 1y | 7
26 | If AGI were coming in a year, what should we do? | MichaelStJules | 8mo | 16
17 | RFC: Philosophical Conservatism in AI Alignment Research | Gordon Seidoh Worley | 4y | 13
29 | Pessimism About Unknown Unknowns Inspires Conservatism | michaelcohen | 2y | 2