Threat Models (15 posts) · Sharp Left Turn (6 posts)
Threat Models

AI X-risk >35% mostly based on a recent peer-reviewed argument — michaelcohen, 1mo · 47 karma, 31 comments
Clarifying AI X-risk — zac_kenton, 1mo · 148 karma, 23 comments
Threat Model Literature Review — zac_kenton, 1mo · 68 karma, 4 comments
What Failure Looks Like: Distilling the Discussion — Ben Pace, 2y · 88 karma, 14 comments
AI Could Defeat All Of Us Combined — HoldenKarnofsky, 6mo · 165 karma, 29 comments
Another (outer) alignment failure story — paulfchristiano, 1y · 253 karma, 38 comments
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) — Andrew_Critch, 1y · 266 karma, 60 comments
Rogue AGI Embodies Valuable Intellectual Property — Mark Xu, 1y · 55 karma, 9 comments
What failure looks like — paulfchristiano, 3y · 437 karma, 49 comments
AI Neorealism: a threat model & success criterion for existential safety — davidad, 5d · 42 karma, 0 comments
Survey on AI existential risk scenarios — Sam Clarke, 1y · 73 karma, 11 comments
Less Realistic Tales of Doom — Mark Xu, 1y · 116 karma, 13 comments
Vignettes Workshop (AI Impacts) — Daniel Kokotajlo, 1y · 38 karma, 3 comments
Distinguishing AI takeover scenarios — Sam Clarke, 1y · 97 karma, 11 comments
Sharp Left Turn

How is the "sharp left turn" defined? — Chris_Leong, 12d · 14 karma, 3 comments
We may be able to see sharp left turns coming — Ethan Perez, 3mo · 70 karma, 26 comments
Refining the Sharp Left Turn threat model, part 2: applying alignment techniques — Vika, 25d · 31 karma, 4 comments
A central AI alignment problem: capabilities generalization, and the sharp left turn — So8res, 6mo · 309 karma, 48 comments
It matters when the first sharp left turn happens — Adam Jermyn, 2mo · 33 karma, 9 comments
Refining the Sharp Left Turn threat model, part 1: claims and mechanisms — Vika, 4mo · 87 karma, 3 comments