AI Risk (303 posts). Related tags: Whole Brain Emulation, Threat Models, Q&A (format), Reading Group, Economic Consequences of AGI, Superintelligence, Sharp Left Turn, Multipolar Scenarios, Technological Unemployment, Missing Moods, Seed AI
Anthropics (460 posts). Related tags: Existential Risk, Academic Papers, Sleeping Beauty Paradox, Paradoxes, Great Filter, Space Exploration & Colonization, Simulation Hypothesis, Longtermism, Extraterrestrial Life, Pascal's Mugging, Grabby Aliens
Score · Title · Author · Posted · Comments
515 · Where I agree and disagree with Eliezer · paulfchristiano · 6mo · 205
405 · AGI Ruin: A List of Lethalities · Eliezer Yudkowsky · 6mo · 653
218 · Discussion with Eliezer Yudkowsky on AGI interventions · Rob Bensinger · 1y · 257
217 · Counterarguments to the basic AI x-risk case · KatjaGrace · 2mo · 122
201 · Slow motion videos as AI risk intuition pumps · Andrew_Critch · 6mo · 36
201 · What failure looks like · paulfchristiano · 3y · 49
197 · A central AI alignment problem: capabilities generalization, and the sharp left turn · So8res · 6mo · 48
193 · We Haven't Uploaded Worms · jefftk · 7y · 19
180 · Reply to Holden on 'Tool AI' · Eliezer Yudkowsky · 10y · 357
178 · Whole Brain Emulation: No Progress on C. elegans After 10 Years · niconiconi · 1y · 77
171 · AI Could Defeat All Of Us Combined · HoldenKarnofsky · 6mo · 29
167 · Another (outer) alignment failure story · paulfchristiano · 1y · 38
143 · On A List of Lethalities · Zvi · 6mo · 48
140 · What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · Andrew_Critch · 1y · 60
Score · Title · Author · Posted · Comments
215 · A Quick Guide to Confronting Doom · Ruby · 8mo · 36
179 · Some AI research areas and their relevance to existential safety · Andrew_Critch · 2y · 40
175 · On saving one's world · Rob Bensinger · 7mo · 5
150 · The AI in a box boxes you · Stuart_Armstrong · 12y · 390
143 · 2021 AI Alignment Literature Review and Charity Comparison · Larks · 12mo · 26
142 · All Possible Views About Humanity's Future Are Wild · HoldenKarnofsky · 1y · 40
141 · If a tree falls on Sleeping Beauty... · ata · 12y · 28
131 · Jaan Tallinn's 2020 Philanthropy Overview · jaan · 1y · 4
122 · "Taking AI Risk Seriously" (thoughts by Critch) · Raemon · 4y · 68
116 · On infinite ethics · Joe Carlsmith · 10mo · 68
116 · The Hero With A Thousand Chances · Eliezer Yudkowsky · 13y · 171
111 · Tiling Agents for Self-Modifying AI (OPFAI #2) · Eliezer Yudkowsky · 9y · 259
108 · Bayesian Adjustment Does Not Defeat Existential Risk Charity · steven0461 · 9y · 92
104 · Leaving Google, Joining the Nucleic Acid Observatory · jefftk · 6mo · 4