AI Risk (303 posts). Related tags: Whole Brain Emulation, Threat Models, Q&A (format), Reading Group, Economic Consequences of AGI, Superintelligence, Sharp Left Turn, Multipolar Scenarios, Technological Unemployment, Missing Moods, Seed AI
Anthropics (460 posts). Related tags: Existential Risk, Academic Papers, Sleeping Beauty Paradox, Paradoxes, Great Filter, Space Exploration & Colonization, Simulation Hypothesis, Longtermism, Extraterrestrial Life, Pascal's Mugging, Grabby Aliens
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 777 | Where I agree and disagree with Eliezer | paulfchristiano | 6mo | 205 |
| 724 | AGI Ruin: A List of Lethalities | Eliezer Yudkowsky | 6mo | 653 |
| 336 | Counterarguments to the basic AI x-risk case | KatjaGrace | 2mo | 122 |
| 325 | Discussion with Eliezer Yudkowsky on AGI interventions | Rob Bensinger | 1y | 257 |
| 319 | What failure looks like | paulfchristiano | 3y | 49 |
| 253 | A central AI alignment problem: capabilities generalization, and the sharp left turn | So8res | 6mo | 48 |
| 210 | Another (outer) alignment failure story | paulfchristiano | 1y | 38 |
| 209 | Slow motion videos as AI risk intuition pumps | Andrew_Critch | 6mo | 36 |
| 203 | What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) | Andrew_Critch | 1y | 60 |
| 185 | Whole Brain Emulation: No Progress on C. elgans After 10 Years | niconiconi | 1y | 77 |
| 168 | AI Could Defeat All Of Us Combined | HoldenKarnofsky | 6mo | 29 |
| 155 | We Haven't Uploaded Worms | jefftk | 7y | 19 |
| 154 | On A List of Lethalities | Zvi | 6mo | 48 |
| 152 | Reply to Holden on 'Tool AI' | Eliezer Yudkowsky | 10y | 357 |
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 224 | A Quick Guide to Confronting Doom | Ruby | 8mo | 36 |
| 199 | Some AI research areas and their relevance to existential safety | Andrew_Critch | 2y | 40 |
| 190 | On saving one's world | Rob Bensinger | 7mo | 5 |
| 164 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 26 |
| 160 | The AI in a box boxes you | Stuart_Armstrong | 12y | 390 |
| 140 | All Possible Views About Humanity's Future Are Wild | HoldenKarnofsky | 1y | 40 |
| 131 | The Hero With A Thousand Chances | Eliezer Yudkowsky | 13y | 171 |
| 130 | If a tree falls on Sleeping Beauty... | ata | 12y | 28 |
| 126 | Intergenerational trauma impeding cooperative existential safety efforts | Andrew_Critch | 6mo | 28 |
| 122 | Semantic Stopsigns | Eliezer Yudkowsky | 15y | 111 |
| 120 | On infinite ethics | Joe Carlsmith | 10mo | 68 |
| 120 | My current thoughts on the risks from SETI | Matthew Barnett | 9mo | 25 |
| 119 | Beyond Astronomical Waste | Wei_Dai | 4y | 41 |
| 114 | Leaving Google, Joining the Nucleic Acid Observatory | jefftk | 6mo | 4 |