AI risk (299 posts)
Related topics: AI safety, Artificial intelligence, Swiss Existential Risk Initiative, Ethics of artificial intelligence, Digital person, Dual-use, Conjecture, AI boxing, David Chalmers
AI forecasting (85 posts)
Related topics: Eliezer Yudkowsky, Paul Christiano, AI takeoff, Epoch, Fund for Alignment Research (FAR)
Top posts in AI risk:

Karma | Title | Author | Posted | Comments
326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
261 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31
236 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12
220 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62
210 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16
186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
171 | AI Risk is like Terminator; Stop Saying it's Not | skluug | 9mo | 43
170 | How I failed to form views on AI safety | Ada-Maaria Hyvärinen | 8mo | 72
168 | AI Could Defeat All Of Us Combined | Holden Karnofsky | 6mo | 11
152 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7
145 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 13
144 | AGI and Lock-In | Lukas_Finnveden | 1mo | 26
144 | On how various plans miss the hard bits of the alignment challenge | So8res | 5mo | 13
144 | A tale of 2.5 orthogonality theses | Arepo | 7mo | 31
Top posts in AI forecasting:

Karma | Title | Author | Posted | Comments
234 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188
197 | Samotsvety's AI risk forecasts | elifland | 3mo | 30
180 | Announcing Epoch: A research organization investigating the road to Transformative AI | Jaime Sevilla | 5mo | 11
165 | Survey on AI existential risk scenarios | Sam Clarke | 1y | 7
95 | 2022 AI expert survey results | Zach Stein-Perlman | 4mo | 7
92 | Disagreement with bio anchors that lead to shorter timelines | mariushobbhahn | 1mo | 1
91 | Roodman's Thoughts on Biological Anchors | lukeprog | 3mo | 7
88 | AI Timelines: Where the Arguments, and the "Experts," Stand | Holden Karnofsky | 1y | 2
85 | A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] | ThomasW | 7mo | 2
75 | Introducing the Fund for Alignment Research (We're Hiring!) | AdamGleave | 5mo | 3
73 | Report on Semi-informative Priors for AI timelines (Open Philanthropy) | Tom_Davidson | 1y | 6
73 | Ajeya's TAI timeline shortened from 2050 to 2040 | Zach Stein-Perlman | 4mo | 2
71 | Discussion with Eliezer Yudkowsky on AGI interventions | RobBensinger | 1y | 35
71 | AI Forecasting Research Ideas | Jaime Sevilla | 1mo | 1