Tags: Threat Models (21 posts), Sharp Left Turn (8 posts), Technological Unemployment, Multipolar Scenarios
Karma | Title | Author | Posted | Comments
319 | What failure looks like | paulfchristiano | 3y | 49
253 | A central AI alignment problem: capabilities generalization, and the sharp left turn | So8res | 6mo | 48
210 | Another (outer) alignment failure story | paulfchristiano | 1y | 38
203 | What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) | Andrew_Critch | 1y | 60
168 | AI Could Defeat All Of Us Combined | HoldenKarnofsky | 6mo | 29
110 | Less Realistic Tales of Doom | Mark Xu | 1y | 13
102 | Clarifying AI X-risk | zac_kenton | 1mo | 23
79 | What Failure Looks Like: Distilling the Discussion | Ben Pace | 2y | 14
71 | Refining the Sharp Left Turn threat model, part 1: claims and mechanisms | Vika | 4mo | 3
70 | Rogue AGI Embodies Valuable Intellectual Property | Mark Xu | 1y | 9
67 | Distinguishing AI takeover scenarios | Sam Clarke | 1y | 11
60 | Survey on AI existential risk scenarios | Sam Clarke | 1y | 11
55 | Threat Model Literature Review | zac_kenton | 1mo | 4
50 | We may be able to see sharp left turns coming | Ethan Perez | 3mo | 26
48 | Value of the Long Tail | johnswentworth | 2y | 7
33 | Conversational Presentation of Why Automation is Different This Time | ryan_b | 4y | 26
25 | How much white collar work could be automated using existing ML models? | AM | 6mo | 4
20 | Equilibrium and prior selection problems in multipolar deployment | JesseClifton | 2y | 11
11 | In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs? | JesseClifton | 2y | 6
9 | Superintelligence 17: Multipolar scenarios | KatjaGrace | 7y | 38
8 | Why multi-agent safety is important | Akbir Khan | 6mo | 2
4 | How would two superintelligent AIs interact, if they are unaligned with each other? | Nathan1123 | 4mo | 6