384 posts
Topics: AI risk · AI safety · AI forecasting · Artificial intelligence · Eliezer Yudkowsky · Paul Christiano · Swiss Existential Risk Initiative · AI takeoff · Digital person · Ethics of artificial intelligence · Dual-use · Epoch

AI alignment · 178 posts
Karma | Title | Author | Age | Comments
300 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7
295 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31
280 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188
244 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
232 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62
214 | How I failed to form views on AI safety | Ada-Maaria Hyvärinen | 8mo | 72
194 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12
190 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16
184 | Announcing Epoch: A research organization investigating the road to Transformative AI | Jaime Sevilla | 5mo | 11
180 | AGI Ruin: A List of Lethalities | EliezerYudkowsky | 6mo | 55
177 | AI Risk is like Terminator; Stop Saying it's Not | skluug | 9mo | 43
146 | My personal cruxes for working on AI safety | Buck | 2y | 35
144 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
140 | DeepMind’s generalist AI, Gato: A non-technical explainer | frances_lorenz | 7mo | 13
154 | 2019 AI Alignment Literature Review and Charity Comparison | Larks | 3y | 28
143 | 2018 AI Alignment Literature Review and Charity Comparison | Larks | 4y | 28
135 | Ben Garfinkel: How sure are we about this AI stuff? | Ben Garfinkel | 3y | 17
134 | How might we align transformative AI if it’s developed very soon? | Holden Karnofsky | 3mo | 16
130 | Lessons learned from talking to >100 academics about AI safety | mariushobbhahn | 2mo | 17
102 | AGI Safety Fundamentals curriculum and application | richard_ngo | 1y | 20
94 | Hiring engineers and researchers to help align GPT-3 | Paul_Christiano | 2y | 19
85 | Alignment 201 curriculum | richard_ngo | 2mo | 8
83 | Disentangling arguments for the importance of AI safety | richard_ngo | 3y | 14
82 | How much EA analysis of AI safety as a cause area exists? | richard_ngo | 3y | 20
81 | What are the coolest topics in AI safety, to a hopelessly pure mathematician? | Jenny K E | 7mo | 30
78 | Paul Christiano: Current work in AI alignment | EA Global | 2y | 1
73 | 7 traps that (we think) new alignment researchers often fall into | Akash | 2mo | 13
71 | AGI safety from first principles | richard_ngo | 2y | 10