Tags (7 posts): Center for Human-Compatible AI (CHAI), Future of Humanity Institute (FHI), Regulation and AI Risk, Future of Life Institute (FLI), Altruism, Summaries

Tags (4 posts): Moral Uncertainty, Utilitarianism, Population Ethics, Disagreement
Karma | Title | Author | Age | Comments
177 | 2018 AI Alignment Literature Review and Charity Comparison | Larks | 4y | 26
133 | 2019 AI Alignment Literature Review and Charity Comparison | Larks | 3y | 18
70 | [AN #69] Stuart Russell's new book on why we need to replace the standard model of AI | Rohin Shah | 3y | 12
56 | Learning preferences by looking at the world | Rohin Shah | 3y | 10
18 | CHAI, Assistance Games, And Fully-Updated Deference [Scott Alexander] | berglund | 2mo | 1
16 | Self-regulation of safety in AI research | Gordon Seidoh Worley | 4y | 6
12 | The Slippery Slope from DALLE-2 to Deepfake Anarchy | scasper | 1mo | 9
82 | Comparing Utilities | abramdemski | 2y | 31
71 | AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah | Palus Astra | 2y | 27
24 | Example population ethics: ordered discounted utility | Stuart_Armstrong | 3y | 16
14 | RFC: Meta-ethical uncertainty in AGI alignment | Gordon Seidoh Worley | 4y | 6