Tags (7 posts): Center for Human-Compatible AI (CHAI) · Future of Humanity Institute (FHI) · Regulation and AI Risk · Future of Life Institute (FLI) · Altruism · Summaries

Tags (4 posts): Moral Uncertainty · Utilitarianism · Population Ethics · Disagreement
Posts (karma · title · author · age · comments):

20 · The Slippery Slope from DALLE-2 to Deepfake Anarchy · scasper · 1mo · 9
30 · Learning preferences by looking at the world · Rohin Shah · 3y · 10
24 · CHAI, Assistance Games, And Fully-Updated Deference [Scott Alexander] · berglund · 2mo · 1
127 · 2019 AI Alignment Literature Review and Charity Comparison · Larks · 3y · 18
8 · Self-regulation of safety in AI research · Gordon Seidoh Worley · 4y · 6
203 · 2018 AI Alignment Literature Review and Charity Comparison · Larks · 4y · 26
50 · [AN #69] Stuart Russell's new book on why we need to replace the standard model of AI · Rohin Shah · 3y · 12
45 · AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah · Palus Astra · 2y · 27
14 · Example population ethics: ordered discounted utility · Stuart_Armstrong · 3y · 16
54 · Comparing Utilities · abramdemski · 2y · 31
18 · RFC: Meta-ethical uncertainty in AGI alignment · Gordon Seidoh Worley · 4y · 6