Tags (43 posts): World Optimization, Practical, AI Safety Camp, Ethics & Morality, Symbol Grounding, Security Mindset, Software Tools, Surveys, Careers, Updated Beliefs (examples of), Organizational Culture & Design, Covid-19

Tags (8 posts): Existential Risk, Academic Papers
Karma | Title | Author | Posted | Comments
314 | How To Get Into Independent Research On Alignment/Agency | johnswentworth | 1y | 33
270 | Six Dimensions of Operational Adequacy in AGI Projects | Eliezer Yudkowsky | 6mo | 65
175 | Morality is Scary | Wei_Dai | 1y | 125
143 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34
135 | Possible takeaways from the coronavirus pandemic for slow AI takeoff | Vika | 2y | 36
118 | An Update on Academia vs. Industry (one year into my faculty job) | David Scott Krueger (formerly: capybaralet) | 3mo | 18
116 | How do we prepare for final crunch time? | Eli Tyre | 1y | 30
95 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14
94 | List of resolved confusions about IDA | Wei_Dai | 3y | 18
94 | Thoughts on AGI organizations and capabilities work | Rob Bensinger | 13d | 17
93 | Don't leave your fingerprints on the future | So8res | 2mo | 32
88 | Linkpost: Github Copilot productivity experiment | Daniel Kokotajlo | 3mo | 4
78 | Nearcast-based "deployment problem" analysis | HoldenKarnofsky | 3mo | 2
76 | Problems in AI Alignment that philosophers could potentially contribute to | Wei_Dai | 3y | 14
199 | Some AI research areas and their relevance to existential safety | Andrew_Critch | 2y | 40
41 | A list of good heuristics that the case for AI x-risk fails | David Scott Krueger (formerly: capybaralet) | 3y | 14
40 | [Linkpost] Existential Risk Analysis in Empirical Research Papers | Dan H | 5mo | 0
35 | New paper: Corrigibility with Utility Preservation | Koen.Holtman | 3y | 11
35 | The Dumbest Possible Gets There First | Artaxerxes | 4mo | 7
28 | What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address. | David Scott Krueger (formerly: capybaralet) | 3y | 13
23 | Techniques for optimizing worst-case performance | paulfchristiano | 3y | 12
18 | Concrete Advice for Forming Inside Views on AI Safety | Neel Nanda | 4mo | 6