13671 posts — tags: Rationality, World Modeling, Practical, World Optimization, Covid-19, Community, Fiction, Site Meta, Scholarship & Learning, Politics, Book Reviews, Open Threads
18722 posts — tags: AI, AI Risk, GPT, AI Timelines, Decision Theory, Interpretability (ML & AI), Machine Learning (ML), AI Takeoff, Inner Alignment, Anthropics, Research Agendas, Language Models
Karma  Title                                                                   Author                    Age   Comments
424    Schelling fences on slippery slopes                                     Scott Alexander           10y   247
417    Making Vaccine                                                          johnswentworth            1y    249
395    Rationalism before the Sequences                                        Eric Raymond              1y    80
360    Thoughts on the Singularity Institute (SI)                              HoldenKarnofsky           10y   1287
357    Diseased thinking: dissolving questions about disease                   Scott Alexander           12y   355
341    Reason as memetic immune disorder                                       PhilGoetz                 13y   181
333    Bets, Bonds, and Kindergarteners                                        jefftk                    1y    35
328    Generalizing From One Example                                           Scott Alexander           13y   416
322    It’s Probably Not Lithium                                               Natália Coelho Mendonça   5mo   181
320    Luck based medicine: my resentful story of becoming a medical miracle   Elizabeth                 2mo   87
314    The Redaction Machine                                                   Ben                       3mo   35
313    Why the tails come apart                                                Thrasymachus              8y    100
313    Eight Short Studies On Excuses                                          Scott Alexander           12y   244
312    The Blue-Minimizing Robot                                               Scott Alexander           11y   162
Karma  Title                                                                   Author                    Age   Comments
515    Where I agree and disagree with Eliezer                                 paulfchristiano           6mo   205
405    AGI Ruin: A List of Lethalities                                         Eliezer Yudkowsky         6mo   653
296    DeepMind alignment team opinions on AGI ruin arguments                  Vika                      4mo   34
287    What DALL-E 2 can and cannot do                                         Swimmer963                7mo   305
281    Is AI Progress Impossible To Predict?                                   alyssavance               7mo   38
275    What should you change in response to an "emergency"? And AI risk       AnnaSalamon               5mo   60
249    Humans are very reliable agents                                         alyssavance               6mo   35
242    Two-year update on my personal AI timelines                             Ajeya Cotra               4mo   60
230    A Mechanistic Interpretability Analysis of Grokking                     Neel Nanda                4mo   39
228    Visible Thoughts Project and Bounty Announcement                        So8res                    1y    104
218    Discussion with Eliezer Yudkowsky on AGI interventions                  Rob Bensinger             1y    257
218    DeepMind: Generally capable agents emerge from open-ended play          Daniel Kokotajlo          1y    53
218    Reward is not the optimization target                                   TurnTrout                 4mo   97
217    Contra Hofstadter on GPT-3 Nonsense                                     rictic                    6mo   22