AI risk (299 posts)
Related topics: AI safety, Artificial intelligence, Swiss Existential Risk Initiative, Ethics of artificial intelligence, Digital person, Dual-use, Conjecture, AI boxing, David Chalmers
AI forecasting (85 posts)
Related topics: Eliezer Yudkowsky, Paul Christiano, AI takeoff, Epoch, Fund for Alignment Research (FAR)
Top posts in AI risk (karma · title · author · posted · comments):
300 · How to pursue a career in technical AI alignment · CharlieRS · 6mo · 7
295 · Why EAs are skeptical about AI Safety · Lukas Trötzmüller · 5mo · 31
244 · Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest · Jason Schukraft · 29d · 18
232 · My Most Likely Reason to Die Young is AI X-Risk · AISafetyIsNotLongtermist · 5mo · 62
214 · How I failed to form views on AI safety · Ada-Maaria Hyvärinen · 8mo · 72
194 · Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover · Ajeya · 5mo · 12
190 · Reasons I’ve been hesitant about high levels of near-ish AI risk · elifland · 5mo · 16
180 · AGI Ruin: A List of Lethalities · EliezerYudkowsky · 6mo · 55
177 · AI Risk is like Terminator; Stop Saying it's Not · skluug · 9mo · 43
146 · My personal cruxes for working on AI safety · Buck · 2y · 35
144 · A challenge for AGI organizations, and a challenge for readers · RobBensinger · 19d · 13
140 · DeepMind’s generalist AI, Gato: A non-technical explainer · frances_lorenz · 7mo · 13
135 · Why AI alignment could be hard with modern deep learning · Ajeya · 1y · 16
126 · AI safety starter pack · mariushobbhahn · 8mo · 11
Top posts in AI forecasting (karma · title · author · posted · comments):
280 · On Deference and Yudkowsky's AI Risk Estimates · Ben Garfinkel · 6mo · 188
184 · Announcing Epoch: A research organization investigating the road to Transformative AI · Jaime Sevilla · 5mo · 11
131 · Survey on AI existential risk scenarios · Sam Clarke · 1y · 7
129 · Samotsvety's AI risk forecasts · elifland · 3mo · 30
101 · AI timelines via bioanchors: the debate in one place · Will Aldred · 4mo · 6
93 · A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] · ThomasW · 7mo · 2
80 · A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines · NunoSempere · 4mo · 43
79 · 2022 AI expert survey results · Zach Stein-Perlman · 4mo · 7
73 · Introducing the Fund for Alignment Research (We're Hiring!) · AdamGleave · 5mo · 3
67 · AI Forecasting Research Ideas · Jaime Sevilla · 1mo · 1
64 · Disagreement with bio anchors that lead to shorter timelines · mariushobbhahn · 1mo · 1
62 · AI Timelines: Where the Arguments, and the "Experts," Stand · Holden Karnofsky · 1y · 2
54 · MIRI announces new "Death With Dignity" strategy (Yudkowsky, 2022) · Will Aldred · 5mo · 0
53 · Roodman's Thoughts on Biological Anchors · lukeprog · 3mo · 7