Tags: World Optimization (18 posts), Covid-19 (9 posts), AI Safety Camp, Ethics & Morality, Surveys
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 78 | Nearcast-based "deployment problem" analysis | HoldenKarnofsky | 3mo | 2 |
| 143 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 19 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0 |
| 116 | How do we prepare for final crunch time? | Eli Tyre | 1y | 30 |
| 57 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4 |
| 135 | Possible takeaways from the coronavirus pandemic for slow AI takeoff | Vika | 2y | 36 |
| 21 | Reading the ethicists 2: Hunting for AI alignment papers | Charlie Steiner | 6mo | 1 |
| 56 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14 |
| 46 | Life and expanding steerable consequences | Alex Flint | 1y | 3 |
| 44 | Where are intentions to be found? | Alex Flint | 1y | 12 |
| 60 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8 |
| 43 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44 |
| 15 | Consistencies as (meta-)preferences | Stuart_Armstrong | 1y | 0 |
| 21 | Constraints from naturalized ethics. | Charlie Steiner | 2y | 0 |
| 93 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 95 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14 |
| 175 | Morality is Scary | Wei_Dai | 1y | 125 |
| 13 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2 |
| 28 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1 |
| 43 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5 |
| 56 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8 |
| 76 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14 |
| 33 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15 |