303 posts: AI Risk, Whole Brain Emulation, Threat Models, Q&A (format), Reading Group, Economic Consequences of AGI, Superintelligence, Sharp Left Turn, Multipolar Scenarios, Technological Unemployment, Missing Moods, Seed AI
460 posts: Anthropics, Existential Risk, Academic Papers, Sleeping Beauty Paradox, Paradoxes, Great Filter, Space Exploration & Colonization, Simulation Hypothesis, Longtermism, Extraterrestrial Life, Pascal's Mugging, Grabby Aliens
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 39 | AI Neorealism: a threat model & success criterion for existential safety | davidad | 5d | 0 |
| 68 | AI Safety Seems Hard to Measure | HoldenKarnofsky | 12d | 5 |
| 336 | Counterarguments to the basic AI x-risk case | KatjaGrace | 2mo | 122 |
| 61 | Who are some prominent reasonable people who are confident that AI won't kill everyone? | Optimization Process | 15d | 40 |
| 103 | AI will change the world, but won’t take it over by playing “3-dimensional chess”. | boazbarak | 28d | 86 |
| 95 | Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) | Jacy Reese Anthis | 28d | 64 |
| 777 | Where I agree and disagree with Eliezer | paulfchristiano | 6mo | 205 |
| 724 | AGI Ruin: A List of Lethalities | Eliezer Yudkowsky | 6mo | 653 |
| 102 | Clarifying AI X-risk | zac_kenton | 1mo | 23 |
| 98 | Am I secretly excited for AI getting weird? | porby | 1mo | 4 |
| 36 | Refining the Sharp Left Turn threat model, part 2: applying alignment techniques | Vika | 25d | 4 |
| 25 | Apply to attend winter AI alignment workshops (Dec 28-30 & Jan 3-5) near Berkeley | Akash | 19d | 1 |
| 19 | Aligned Behavior is not Evidence of Alignment Past a Certain Level of Intelligence | Ronny Fernandez | 15d | 5 |
| 67 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | Robert Miles | 1mo | 100 |
| 14 | all claw, no world — and other thoughts on the universal distribution | carado | 6d | 0 |
| 59 | Could a single alien message destroy us? | Writer | 25d | 23 |
| 29 | Three Fables of Magical Girls and Longtermism | Ulisse Mini | 18d | 11 |
| 70 | Far-UVC Light Update: No, LEDs are not around the corner (tweetstorm) | Davidmanheim | 1mo | 27 |
| 93 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 190 | On saving one's world | Rob Bensinger | 7mo | 5 |
| 224 | A Quick Guide to Confronting Doom | Ruby | 8mo | 36 |
| 30 | The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter | mako yass | 1mo | 13 |
| 23 | Intercept article about lab accidents | ChristianKl | 1mo | 9 |
| 126 | Intergenerational trauma impeding cooperative existential safety efforts | Andrew_Critch | 6mo | 28 |
| 114 | Leaving Google, Joining the Nucleic Acid Observatory | jefftk | 6mo | 4 |
| 20 | 4 Key Assumptions in AI Safety | Prometheus | 1mo | 5 |
| 87 | In defense of flailing, with foreword by Bill Burr | lc | 6mo | 8 |
| 164 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 26 |