AI Risk (303 posts): Whole Brain Emulation, Threat Models, Q&A (format), Reading Group, Economic Consequences of AGI, Superintelligence, Sharp Left Turn, Multipolar Scenarios, Technological Unemployment, Missing Moods, Seed AI
Anthropics (460 posts): Existential Risk, Academic Papers, Sleeping Beauty Paradox, Paradoxes, Great Filter, Space Exploration & Colonization, Simulation Hypothesis, Longtermism, Extraterrestrial Life, Pascal's Mugging, Grabby Aliens
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 42 | AI Neorealism: a threat model & success criterion for existential safety | davidad | 5d | 0 |
| 77 | AI Safety Seems Hard to Measure | HoldenKarnofsky | 12d | 5 |
| 455 | Counterarguments to the basic AI x-risk case | KatjaGrace | 2mo | 122 |
| 64 | Who are some prominent reasonable people who are confident that AI won't kill everyone? | Optimization Process | 15d | 40 |
| 1039 | Where I agree and disagree with Eliezer | paulfchristiano | 6mo | 205 |
| 113 | AI will change the world, but won’t take it over by playing “3-dimensional chess”. | boazbarak | 28d | 86 |
| 1043 | AGI Ruin: A List of Lethalities | Eliezer Yudkowsky | 6mo | 653 |
| 103 | Meta AI announces Cicero: Human-Level Diplomacy play (with dialogue) | Jacy Reese Anthis | 28d | 64 |
| 148 | Clarifying AI X-risk | zac_kenton | 1mo | 23 |
| 117 | Am I secretly excited for AI getting weird? | porby | 1mo | 4 |
| 100 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | Robert Miles | 1mo | 100 |
| 28 | Apply to attend winter AI alignment workshops (Dec 28-30 & Jan 3-5) near Berkeley | Akash | 19d | 1 |
| 21 | Aligned Behavior is not Evidence of Alignment Past a Certain Level of Intelligence | Ronny Fernandez | 15d | 5 |
| 14 | How is the "sharp left turn" defined? | Chris_Leong | 12d | 3 |
| 13 | all claw, no world — and other thoughts on the universal distribution | carado | 6d | 0 |
| 55 | Could a single alien message destroy us? | Writer | 25d | 23 |
| 35 | Three Fables of Magical Girls and Longtermism | Ulisse Mini | 18d | 11 |
| 68 | Far-UVC Light Update: No, LEDs are not around the corner (tweetstorm) | Davidmanheim | 1mo | 27 |
| 84 | Don't leave your fingerprints on the future | So8res | 2mo | 32 |
| 205 | On saving one's world | Rob Bensinger | 7mo | 5 |
| 32 | 4 Key Assumptions in AI Safety | Prometheus | 1mo | 5 |
| 233 | A Quick Guide to Confronting Doom | Ruby | 8mo | 36 |
| 150 | Intergenerational trauma impeding cooperative existential safety efforts | Andrew_Critch | 6mo | 28 |
| 25 | The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter | mako yass | 1mo | 13 |
| 124 | Leaving Google, Joining the Nucleic Acid Observatory | jefftk | 6mo | 4 |
| 6 | AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts | Jordan Arel | 14d | 2 |
| 20 | Intercept article about lab accidents | ChristianKl | 1mo | 9 |
| 13 | Introducing The Logical Foundation, A Plan to End Poverty With Guaranteed Income | Michael Simm | 1mo | 23 |