299 posts · Tags: AI risk, AI safety, Artificial intelligence, Swiss Existential Risk Initiative, Ethics of artificial intelligence, Digital person, Dual-use, Conjecture, AI boxing, David Chalmers

85 posts · Tags: AI forecasting, Eliezer Yudkowsky, Paul Christiano, AI takeoff, Epoch, Fund for Alignment Research (FAR)
Karma | Title | Author | Posted | Comments
285 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
278 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31
226 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7
226 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62
215 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12
200 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16
192 | How I failed to form views on AI safety | Ada-Maaria Hyvärinen | 8mo | 72
174 | AI Risk is like Terminator; Stop Saying it's Not | skluug | 9mo | 43
165 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
159 | AGI Ruin: A List of Lethalities | EliezerYudkowsky | 6mo | 55
142 | AI Could Defeat All Of Us Combined | Holden Karnofsky | 6mo | 11
135 | My personal cruxes for working on AI safety | Buck | 2y | 35
134 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 13
132 | A tale of 2.5 orthogonality theses | Arepo | 7mo | 31
Karma | Title | Author | Posted | Comments
257 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188
182 | Announcing Epoch: A research organization investigating the road to Transformative AI | Jaime Sevilla | 5mo | 11
163 | Samotsvety's AI risk forecasts | elifland | 3mo | 30
148 | Survey on AI existential risk scenarios | Sam Clarke | 1y | 7
89 | A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] | ThomasW | 7mo | 2
87 | 2022 AI expert survey results | Zach Stein-Perlman | 4mo | 7
83 | AI timelines via bioanchors: the debate in one place | Will Aldred | 4mo | 6
78 | Disagreement with bio anchors that lead to shorter timelines | mariushobbhahn | 1mo | 1
75 | AI Timelines: Where the Arguments, and the "Experts," Stand | Holden Karnofsky | 1y | 2
75 | A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines | NunoSempere | 4mo | 43
74 | Introducing the Fund for Alignment Research (We're Hiring!) | AdamGleave | 5mo | 3
72 | Roodman's Thoughts on Biological Anchors | lukeprog | 3mo | 7
69 | AI Forecasting Research Ideas | Jaime Sevilla | 1mo | 1
62 | Report on Semi-informative Priors for AI timelines (Open Philanthropy) | Tom_Davidson | 1y | 6