Threat Models (10 posts)

| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 416 | What failure looks like | paulfchristiano | 3y | 49 |
| 240 | Another (outer) alignment failure story | paulfchristiano | 1y | 38 |
| 111 | Less Realistic Tales of Doom | Mark Xu | 1y | 13 |
| 91 | Distinguishing AI takeover scenarios | Sam Clarke | 1y | 11 |
| 69 | Survey on AI existential risk scenarios | Sam Clarke | 1y | 11 |
| 53 | Rogue AGI Embodies Valuable Intellectual Property | Mark Xu | 1y | 9 |
| 44 | AI X-risk >35% mostly based on a recent peer-reviewed argument | michaelcohen | 1mo | 31 |
| 40 | AI Neorealism: a threat model & success criterion for existential safety | davidad | 5d | 0 |
| 36 | Vignettes Workshop (AI Impacts) | Daniel Kokotajlo | 1y | 3 |
| 28 | Investigating AI Takeover Scenarios | Sammy Martin | 1y | 1 |

Sharp Left Turn (4 posts)

| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 292 | A central AI alignment problem: capabilities generalization, and the sharp left turn | So8res | 6mo | 48 |
| 83 | Refining the Sharp Left Turn threat model, part 1: claims and mechanisms | Vika | 4mo | 3 |
| 67 | We may be able to see sharp left turns coming | Ethan Perez | 3mo | 26 |
| 30 | Refining the Sharp Left Turn threat model, part 2: applying alignment techniques | Vika | 25d | 4 |