Tags: World Optimization, Covid-19, AI Safety Camp, Ethics & Morality, Surveys
Karma | Title | Author | Posted | Comments
18 | Some ideas for epistles to the AI ethicists | Charlie Steiner | 3mo | 0
42 | What technologies could cause world GDP doubling times to be <8 years? | Daniel Kokotajlo | 2y | 44
73 | AI x-risk reduction: why I chose academia over industry | David Scott Krueger (formerly: capybaralet) | 1y | 14
23 | Reading the ethicists 2: Hunting for AI alignment papers | Charlie Steiner | 6mo | 1
36 | Where are intentions to be found? | Alex Flint | 1y | 12
9 | Consistencies as (meta-)preferences | Stuart_Armstrong | 1y | 0
8 | Optimization, speculations on the X and only X problem. | Donald Hobson | 1y | 5
42 | AI Research Considerations for Human Existential Safety (ARCHES) | habryka | 2y | 8
12 | Pros and cons of working on near-term technical AI safety and assurance | Aryeh Englander | 1y | 1
7 | AI Problems Shared by Non-AI Systems | VojtaKovarik | 2y | 2
201 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
34 | Life and expanding steerable consequences | Alex Flint | 1y | 3
38 | Safety-capabilities tradeoff dials are inevitable in AGI | Steven Byrnes | 1y | 4
8 | Why you should minimax in two-player zero-sum games | Nisan | 2y | 1
80 | Don't leave your fingerprints on the future | So8res | 2mo | 32
58 | A survey of tool use and workflows in alignment research | Logan Riggs | 9mo | 5
20 | By default, avoid ambiguous distant situations | Stuart_Armstrong | 3y | 15
49 | "Existential risk from AI" survey results | Rob Bensinger | 1y | 8
30 | Reflection Mechanisms as an Alignment target: A survey | Marius Hobbhahn | 6mo | 1
12 | Reflection Mechanisms as an Alignment target: A follow-up survey | Marius Hobbhahn | 2mo | 2
83 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14
64 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14
177 | Morality is Scary | Wei_Dai | 1y | 125