| Score | Title | Author | Age | Comments |
|------:|-------|--------|----:|---------:|
| 20 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0 |
| 44 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44 |
| 39 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14 |
| 19 | Reading the ethicists 2: Hunting for AI alignment papers | Charlie Steiner | 6mo | 1 |
| 52 | Where are intentions to be found? | Alex Flint | 1y | 12 |
| 21 | Consistencies as (meta-)preferences | Stuart_Armstrong | 1y | 0 |
| 12 | Optimization, speculations on the X and only X problem. | Donald Hobson | 1y | 5 |
| 78 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8 |
| 10 | Pros and cons of working on near-term technical AI safety and assurance | Aryeh Englander | 1y | 1 |
| 7 | AI Problems Shared by Non-AI Systems | VojtaKovarik | 2y | 2 |
| 85 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 58 | Life and expanding steerable consequences | Alex Flint | 1y | 3 |
| 76 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4 |
| 26 | Why you should minimax in two-player zero-sum games | Nisan | 2y | 1 |
| 106 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 28 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5 |
| 46 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15 |
| 63 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8 |
| 26 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1 |
| 14 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2 |
| 107 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14 |
| 88 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14 |
| 173 | Morality is Scary | Wei_Dai | 1y | 125 |