13671 posts
Tags: Rationality, World Modeling, Practical, World Optimization, Covid-19, Community, Fiction, Site Meta, Scholarship & Learning, Politics, Book Reviews, Open Threads
18722 posts
Tags: AI, AI Risk, GPT, AI Timelines, Decision Theory, Interpretability (ML & AI), Machine Learning (ML), AI Takeoff, Inner Alignment, Anthropics, Research Agendas, Language Models
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 689 | Eight Short Studies On Excuses | Scott Alexander | 12y | 244 |
| 608 | Preface | Eliezer Yudkowsky | 7y | 14 |
| 597 | The Best Textbooks on Every Subject | lukeprog | 11y | 394 |
| 561 | Making Vaccine | johnswentworth | 1y | 249 |
| 554 | What an actually pessimistic containment strategy looks like | lc | 8mo | 136 |
| 524 | Rationalism before the Sequences | Eric Raymond | 1y | 80 |
| 512 | Schelling fences on slippery slopes | Scott Alexander | 10y | 247 |
| 452 | Diseased thinking: dissolving questions about disease | Scott Alexander | 12y | 355 |
| 440 | Pain is not the unit of Effort | alkjash | 2y | 83 |
| 428 | Luck based medicine: my resentful story of becoming a medical miracle | Elizabeth | 2mo | 87 |
| 423 | It’s Probably Not Lithium | Natália Coelho Mendonça | 5mo | 181 |
| 419 | Humans are not automatically strategic | AnnaSalamon | 12y | 275 |
| 417 | Reason as memetic immune disorder | PhilGoetz | 13y | 181 |
| 415 | The Redaction Machine | Ben | 3mo | 35 |
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 777 | Where I agree and disagree with Eliezer | paulfchristiano | 6mo | 205 |
| 724 | AGI Ruin: A List of Lethalities | Eliezer Yudkowsky | 6mo | 653 |
| 472 | Simulators | janus | 3mo | 103 |
| 364 | chinchilla's wild implications | nostalgebraist | 4mo | 114 |
| 364 | DeepMind alignment team opinions on AGI ruin arguments | Vika | 4mo | 34 |
| 351 | What DALL-E 2 can and cannot do | Swimmer963 | 7mo | 305 |
| 344 | (My understanding of) What Everyone in Technical Alignment is Doing and Why | Thomas Larsen | 3mo | 83 |
| 338 | A Mechanistic Interpretability Analysis of Grokking | Neel Nanda | 4mo | 39 |
| 336 | Counterarguments to the basic AI x-risk case | KatjaGrace | 2mo | 122 |
| 325 | Discussion with Eliezer Yudkowsky on AGI interventions | Rob Bensinger | 1y | 257 |
| 319 | What failure looks like | paulfchristiano | 3y | 49 |
| 314 | How To Get Into Independent Research On Alignment/Agency | johnswentworth | 1y | 33 |
| 310 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya Cotra | 5mo | 89 |
| 303 | What should you change in response to an "emergency"? And AI risk | AnnaSalamon | 5mo | 60 |