World Optimization, Covid-19 (18 posts)
Karma | Title | Author | Age | Comments
201 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
122 | How do we prepare for final crunch time? | Eli Tyre | 1y | 30
111 | Possible takeaways from the coronavirus pandemic for slow AI takeoff | Vika | 2y | 36
76 | Nearcast-based "deployment problem" analysis | HoldenKarnofsky | 3mo | 2
73 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14
42 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44
42 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8
38 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4
36 | Where are intentions to be found? | Alex Flint | 1y | 12
34 | Life and expanding steerable consequences | Alex Flint | 1y | 3
23 | Reading the ethicists 2: Hunting for AI alignment papers | Charlie Steiner | 6mo | 1
18 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0
13 | Constraints from naturalized ethics. | Charlie Steiner | 2y | 0
12 | Pros and cons of working on near-term technical AI safety and assurance | Aryeh Englander | 1y | 1
AI Safety Camp, Ethics & Morality, Surveys (9 posts)

Karma | Title | Author | Age | Comments
177 | Morality is Scary | Wei_Dai | 1y | 125
83 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14
80 | Don't leave your fingerprints on the future | So8res | 2mo | 32
64 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14
58 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5
49 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8
30 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1
20 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15
12 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2