Threat Models (21 posts)
Sub-tags: Sharp Left Turn (8 posts), Technological Unemployment, Multipolar Scenarios
Points | Title | Author | Posted | Comments
36 | AI Neorealism: a threat model & success criterion for existential safety | davidad | 5d | 0
41 | Refining the Sharp Left Turn threat model, part 2: applying alignment techniques | Vika | 25d | 4
12 | How is the "sharp left turn" defined? | Chris_Leong | 12d | 3
56 | Clarifying AI X-risk | zac_kenton | 1mo | 23
197 | A central AI alignment problem: capabilities generalization, and the sharp left turn | So8res | 6mo | 48
42 | Threat Model Literature Review | zac_kenton | 1mo | 4
171 | AI Could Defeat All Of Us Combined | HoldenKarnofsky | 6mo | 29
25 | AI X-risk >35% mostly based on a recent peer-reviewed argument | michaelcohen | 1mo | 31
37 | It matters when the first sharp left turn happens | Adam Jermyn | 2mo | 9
55 | Refining the Sharp Left Turn threat model, part 1: claims and mechanisms | Vika | 4mo | 3
30 | We may be able to see sharp left turns coming | Ethan Perez | 3mo | 26
167 | Another (outer) alignment failure story | paulfchristiano | 1y | 38
140 | What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) | Andrew_Critch | 1y | 60
104 | Less Realistic Tales of Doom | Mark Xu | 1y | 13
17 | How much white collar work could be automated using existing ML models? | AM | 6mo | 4
57 | Value of the Long Tail | johnswentworth | 2y | 7
26 | Equilibrium and prior selection problems in multipolar deployment | JesseClifton | 2y | 11
2 | How would two superintelligent AIs interact, if they are unaligned with each other? | Nathan1123 | 4mo | 6
38 | Conversational Presentation of Why Automation is Different This Time | ryan_b | 4y | 26
12 | Superintelligence 17: Multipolar scenarios | KatjaGrace | 7y | 38
2 | In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs? | JesseClifton | 2y | 6
-4 | Why multi-agent safety is important | Akbir Khan | 6mo | 2