3083 posts · Tags: AI, GPT, AI Timelines, Machine Learning (ML), AI Takeoff, Interpretability (ML & AI), Language Models, Conjecture (org), Careers, Instrumental Convergence, Iterated Amplification, Art
763 posts · Tags: Anthropics, Existential Risk, Whole Brain Emulation, Sleeping Beauty Paradox, Threat Models, Academic Papers, Space Exploration & Colonization, Great Filter, Paradoxes, Extraterrestrial Life, Pascal's Mugging, Longtermism
Score · Title · Author · Posted · Comments
296 · DeepMind alignment team opinions on AGI ruin arguments · Vika · 4mo · 34
287 · What DALL-E 2 can and cannot do · Swimmer963 · 7mo · 305
275 · What should you change in response to an "emergency"? And AI risk · AnnaSalamon · 5mo · 60
242 · Two-year update on my personal AI timelines · Ajeya Cotra · 4mo · 60
230 · A Mechanistic Interpretability Analysis of Grokking · Neel Nanda · 4mo · 39
228 · Visible Thoughts Project and Bounty Announcement · So8res · 1y · 104
218 · DeepMind: Generally capable agents emerge from open-ended play · Daniel Kokotajlo · 1y · 53
217 · Contra Hofstadter on GPT-3 Nonsense · rictic · 6mo · 22
216 · Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover · Ajeya Cotra · 5mo · 89
207 · chinchilla's wild implications · nostalgebraist · 4mo · 114
207 · A challenge for AGI organizations, and a challenge for readers · Rob Bensinger · 19d · 30
203 · larger language models may disappoint you [or, an eternally unfinished draft] · nostalgebraist · 1y · 29
202 · Hiring engineers and researchers to help align GPT-3 · paulfchristiano · 2y · 14
202 · Safetywashing · Adam Scholl · 5mo · 17
515 · Where I agree and disagree with Eliezer · paulfchristiano · 6mo · 205
405 · AGI Ruin: A List of Lethalities · Eliezer Yudkowsky · 6mo · 653
218 · Discussion with Eliezer Yudkowsky on AGI interventions · Rob Bensinger · 1y · 257
217 · Counterarguments to the basic AI x-risk case · KatjaGrace · 2mo · 122
215 · A Quick Guide to Confronting Doom · Ruby · 8mo · 36
201 · Slow motion videos as AI risk intuition pumps · Andrew_Critch · 6mo · 36
201 · What failure looks like · paulfchristiano · 3y · 49
197 · A central AI alignment problem: capabilities generalization, and the sharp left turn · So8res · 6mo · 48
193 · We Haven't Uploaded Worms · jefftk · 7y · 19
180 · Reply to Holden on 'Tool AI' · Eliezer Yudkowsky · 10y · 357
179 · Some AI research areas and their relevance to existential safety · Andrew_Critch · 2y · 40
178 · Whole Brain Emulation: No Progress on C. elegans After 10 Years · niconiconi · 1y · 77
175 · On saving one's world · Rob Bensinger · 7mo · 5
171 · AI Could Defeat All Of Us Combined · HoldenKarnofsky · 6mo · 29