Branch 1 (32 posts)
Tags: Threat Models, Coordination / Cooperation, Sharp Left Turn, Fiction, AI Risk Concrete Stories, Site Meta, AMA, Multipolar Scenarios, Prisoner's Dilemma, Moloch, Paperclip Maximizer, Q&A (format)
Branch 2 (51 posts)
Tags: World Optimization, Existential Risk, Practical, Academic Papers, Ethics & Morality, AI Safety Camp, Symbol Grounding, Security Mindset, Software Tools, Careers, Surveys, Updated Beliefs (examples of)
Branch 1 posts:

| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 517 | It Looks Like You're Trying To Take Over The World | gwern | 9mo | 125 |
| 416 | What failure looks like | paulfchristiano | 3y | 49 |
| 292 | A central AI alignment problem: capabilities generalization, and the sharp left turn | So8res | 6mo | 48 |
| 252 | What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) | Andrew_Critch | 1y | 60 |
| 240 | Another (outer) alignment failure story | paulfchristiano | 1y | 38 |
| 189 | The next decades might be wild | Marius Hobbhahn | 5d | 21 |
| 142 | Clarifying “What failure looks like” | Sam Clarke | 2y | 14 |
| 136 | AI coordination needs clear wins | evhub | 3mo | 15 |
| 120 | Late 2021 MIRI Conversations: AMA / Discussion | Rob Bensinger | 9mo | 208 |
| 116 | Prisoners' Dilemma with Costs to Modeling | Scott Garrabrant | 4y | 20 |
| 111 | Less Realistic Tales of Doom | Mark Xu | 1y | 13 |
| 103 | Welcome & FAQ! | Ruby | 1y | 8 |
| 91 | Distinguishing AI takeover scenarios | Sam Clarke | 1y | 11 |
| 86 | AI takeoff story: a continuation of progress by other means | Edouard Harris | 1y | 13 |
Branch 2 posts:

| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 413 | How To Get Into Independent Research On Alignment/Agency | johnswentworth | 1y | 33 |
| 284 | Six Dimensions of Operational Adequacy in AGI Projects | Eliezer Yudkowsky | 6mo | 65 |
| 207 | Some AI research areas and their relevance to existential safety | Andrew_Critch | 2y | 40 |
| 201 | Reshaping the AI Industry | Thane Ruthenis | 6mo | 34 |
| 177 | Morality is Scary | Wei_Dai | 1y | 125 |
| 144 | An Update on Academia vs. Industry (one year into my faculty job) | David Scott Krueger (formerly: capybaralet) | 3mo | 18 |
| 122 | How do we prepare for final crunch time? | Eli Tyre | 1y | 30 |
| 111 | Possible takeaways from the coronavirus pandemic for slow AI takeoff | Vika | 2y | 36 |
| 94 | Linkpost: Github Copilot productivity experiment | Daniel Kokotajlo | 3mo | 4 |
| 83 | Moral strategies at different capability levels | Richard_Ngo | 4mo | 14 |
| 82 | Thoughts on AGI organizations and capabilities work | Rob Bensinger | 13d | 17 |
| 80 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 76 | Nearcast-based "deployment problem" analysis | HoldenKarnofsky | 3mo | 2 |
| 75 | AI Safety Papers: An App for the TAI Safety Database | ozziegooen | 1y | 13 |