Related tags: World Optimization (18 posts) · Covid-19 (9 posts) · AI Safety Camp · Ethics & Morality · Surveys
| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 80 | Nearcast-based "deployment problem" analysis | HoldenKarnofsky | 3mo | 2 |
| 85 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 20 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0 |
| 76 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4 |
| 110 | How do we prepare for final crunch time? | Eli Tyre | 1y | 30 |
| 159 | Possible takeaways from the coronavirus pandemic for slow AI takeoff | Vika | 2y | 36 |
| 19 | Reading the ethicists 2: Hunting for AI alignment papers | Charlie Steiner | 6mo | 1 |
| 58 | Life and expanding steerable consequences | Alex Flint | 1y | 3 |
| 52 | Where are intentions to be found? | Alex Flint | 1y | 12 |
| 78 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8 |
| 39 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14 |
| 44 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44 |
| 21 | Consistencies as (meta-)preferences | Stuart_Armstrong | 1y | 0 |
| 29 | Constraints from naturalized ethics. | Charlie Steiner | 2y | 0 |
| 106 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 107 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14 |
| 173 | Morality is Scary | Wei_Dai | 1y | 125 |
| 14 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2 |
| 26 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1 |
| 28 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5 |
| 63 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8 |
| 88 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14 |
| 46 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15 |