AI Success Models · Market making (AI safety technique) · Verification (7 posts)

111 points · Conversation with Eliezer: What do you want the system to do? · Akash · 5mo · 38 comments
81 points · A positive case for how we might succeed at prosaic AI alignment · evhub · 1y · 47 comments
65 points · Various Alignment Strategies (and how likely they are to work) · Logan Zoellner · 7mo · 34 comments
63 points · Solving the whole AGI control problem, version 0.0001 · Steven Byrnes · 1y · 7 comments
20 points · Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios · Evan R. Murphy · 7mo · 0 comments
14 points · If AGI were coming in a year, what should we do? · MichaelStJules · 8mo · 16 comments
10 points · An Open Agency Architecture for Safe Transformative AI · davidad · 11h · 11 comments

Conservatism (AI) (2 posts)

33 points · Pessimism About Unknown Unknowns Inspires Conservatism · michaelcohen · 2y · 2 comments
17 points · RFC: Philosophical Conservatism in AI Alignment Research · Gordon Seidoh Worley · 4y · 13 comments