Tags: AI risk (143 posts) · AI safety (100 posts) · AI boxing
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18 |
| 261 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31 |
| 236 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12 |
| 220 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62 |
| 210 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16 |
| 171 | AI Risk is like Terminator; Stop Saying it's Not | skluug | 9mo | 43 |
| 168 | AI Could Defeat All Of Us Combined | Holden Karnofsky | 6mo | 11 |
| 152 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7 |
| 145 | Transcripts of interviews with AI researchers | Vael Gates | 7mo | 13 |
| 144 | AGI and Lock-In | Lukas_Finnveden | 1mo | 26 |
| 144 | On how various plans miss the hard bits of the alignment challenge | So8res | 5mo | 13 |
| 144 | A tale of 2.5 orthogonality theses | Arepo | 7mo | 31 |
| 141 | [linkpost] Christiano on agreement/disagreement with Yudkowsky's "List of Lethalities" | Owen Cotton-Barratt | 6mo | 1 |
| 139 | How I Formed My Own Views About AI Safety | Neel Nanda | 9mo | 12 |
| 186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13 |
| 170 | How I failed to form views on AI safety | Ada-Maaria Hyvärinen | 8mo | 72 |
| 108 | AI safety starter pack | mariushobbhahn | 8mo | 11 |
| 100 | 13 Very Different Stances on AGI | Ozzie Gooen | 11mo | 27 |
| 82 | How to become an AI safety researcher | peterbarnett | 8mo | 15 |
| 82 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94 |
| 81 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2 |
| 81 | Do AI companies make their safety researchers sign a non-disparagement clause? | Ofer | 3mo | 2 |
| 79 | Begging, Pleading AI Orgs to Comment on NIST AI Risk Management Framework | Bridges | 8mo | 4 |
| 77 | Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? | Benjamin Hilton | 2mo | 9 |
| 74 | The Parable of the Boy Who Cried 5% Chance of Wolf | Kat Woods | 4mo | 8 |
| 67 | AI Safety Ideas: A collaborative AI safety research platform | Apart Research | 2mo | 13 |
| 66 | Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility | Akash | 28d | 1 |
| 65 | What does it mean for an AGI to be 'safe'? | So8res | 2mo | 21 |