303 posts · Tags: AI Risk, Whole Brain Emulation, Threat Models, Q&A (format), Reading Group, Economic Consequences of AGI, Superintelligence, Sharp Left Turn, Multipolar Scenarios, Technological Unemployment, Missing Moods, Seed AI
460 posts · Tags: Anthropics, Existential Risk, Academic Papers, Sleeping Beauty Paradox, Paradoxes, Great Filter, Space Exploration & Colonization, Simulation Hypothesis, Longtermism, Extraterrestrial Life, Pascal's Mugging, Grabby Aliens
Karma · Title · Author · Age · Comments
1043 · AGI Ruin: A List of Lethalities · Eliezer Yudkowsky · 6mo · 653
1039 · Where I agree and disagree with Eliezer · paulfchristiano · 6mo · 205
455 · Counterarguments to the basic AI x-risk case · KatjaGrace · 2mo · 122
437 · What failure looks like · paulfchristiano · 3y · 49
432 · Discussion with Eliezer Yudkowsky on AGI interventions · Rob Bensinger · 1y · 257
309 · A central AI alignment problem: capabilities generalization, and the sharp left turn · So8res · 6mo · 48
266 · What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · Andrew_Critch · 1y · 60
253 · Another (outer) alignment failure story · paulfchristiano · 1y · 38
217 · Slow motion videos as AI risk intuition pumps · Andrew_Critch · 6mo · 36
205 · [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · Not Relevant · 8mo · 165
192 · Whole Brain Emulation: No Progress on C. elegans After 10 Years · niconiconi · 1y · 77
170 · AGI ruin scenarios are likely (and disjunctive) · So8res · 4mo · 37
165 · On A List of Lethalities · Zvi · 6mo · 48
165 · AI Could Defeat All Of Us Combined · HoldenKarnofsky · 6mo · 29
Karma · Title · Author · Age · Comments
233 · A Quick Guide to Confronting Doom · Ruby · 8mo · 36
219 · Some AI research areas and their relevance to existential safety · Andrew_Critch · 2y · 40
205 · On saving one's world · Rob Bensinger · 7mo · 5
185 · 2021 AI Alignment Literature Review and Charity Comparison · Larks · 12mo · 26
170 · The AI in a box boxes you · Stuart_Armstrong · 12y · 390
164 · Semantic Stopsigns · Eliezer Yudkowsky · 15y · 111
150 · Intergenerational trauma impeding cooperative existential safety efforts · Andrew_Critch · 6mo · 28
146 · The Hero With A Thousand Chances · Eliezer Yudkowsky · 13y · 171
141 · My current thoughts on the risks from SETI · Matthew Barnett · 9mo · 25
138 · All Possible Views About Humanity's Future Are Wild · HoldenKarnofsky · 1y · 40
137 · Beyond Astronomical Waste · Wei_Dai · 4y · 41
124 · On infinite ethics · Joe Carlsmith · 10mo · 68
124 · Leaving Google, Joining the Nucleic Acid Observatory · jefftk · 6mo · 4
119 · If a tree falls on Sleeping Beauty... · ata · 12y · 28