Tags: World Optimization (18 posts) · Covid-19 · AI Safety Camp (9 posts) · Ethics & Morality · Surveys
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 159 | Possible takeaways from the coronavirus pandemic for slow AI takeoff | Vika | 2y | 36 |
| 110 | How do we prepare for final crunch time? | Eli Tyre | 1y | 30 |
| 85 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 80 | Nearcast-based "deployment problem" analysis | HoldenKarnofsky | 3mo | 2 |
| 78 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8 |
| 76 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4 |
| 58 | Life and expanding steerable consequences | Alex Flint | 1y | 3 |
| 52 | Where are intentions to be found? | Alex Flint | 1y | 12 |
| 44 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44 |
| 39 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14 |
| 29 | Constraints from naturalized ethics. | Charlie Steiner | 2y | 0 |
| 26 | Why you should minimax in two-player zero-sum games | Nisan | 2y | 1 |
| 21 | Consistencies as (meta-)preferences | Stuart_Armstrong | 1y | 0 |
| 20 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0 |

| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 173 | Morality is Scary | Wei_Dai | 1y | 125 |
| 107 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14 |
| 106 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 88 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14 |
| 63 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8 |
| 46 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15 |
| 28 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5 |
| 26 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1 |
| 14 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2 |