Tags: World Optimization, Covid-19, AI Safety Camp, Ethics & Morality, Surveys
Karma | Title | Author | Posted | Comments
19 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0
43 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44
56 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14
21 | Reading the ethicists 2: Hunting for AI alignment papers | Charlie Steiner | 6mo | 1
44 | Where are intentions to be found? | Alex Flint | 1y | 12
15 | Consistencies as (meta-)preferences | Stuart_Armstrong | 1y | 0
10 | Optimization, speculations on the X and only X problem. | Donald Hobson | 1y | 5
60 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8
11 | Pros and cons of working on near-term technical AI safety and assurance | Aryeh Englander | 1y | 1
7 | AI Problems Shared by Non-AI Systems | VojtaKovarik | 2y | 2
143 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
46 | Life and expanding steerable consequences | Alex Flint | 1y | 3
57 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4
17 | Why you should minimax in two-player zero-sum games | Nisan | 2y | 1
93 | Don't leave your fingerprints on the future | So8res | 2mo | 32
43 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5
33 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15
56 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8
28 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1
13 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2
95 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14
76 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14
175 | Morality is Scary | Wei_Dai | 1y | 125