Existential risk & biosecurity (473 posts)

Tags: Existential risk, Biosecurity, COVID-19 pandemic, History of effective altruism, Vaccines, Global catastrophic biological risk, Information hazard, Pandemic preparedness, The Precipice, Atomically precise manufacturing, Climate engineering, Research agendas, questions, and project lists
AI risk (721 posts)

Tags: AI risk, AI alignment, AI governance, AI safety, AI forecasting, Artificial intelligence, European Union, Transformative artificial intelligence, Information security, Standards and regulation, Eliezer Yudkowsky, Paul Christiano
Score | Title | Author | Posted | Comments
390 | Announcing Alvea—An EA COVID Vaccine Project | kyle_fish | 10mo | 25
389 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8
385 | Concrete Biosecurity Projects (some of which could be big) | ASB | 11mo | 72
308 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56
241 | Overreacting to current events can be very costly | Kelsey Piper | 2mo | 71
217 | Announcing the Nucleic Acid Observatory project for early detection of catastrophic biothreats | Will Bradshaw | 7mo | 3
206 | List of Lists of Concrete Biosecurity Project Ideas | Tessa | 5mo | 5
197 | EA megaprojects continued | mariushobbhahn | 1y | 49
195 | COVID-19 brief for friends and family | eca | 2y | 68
195 | Stop Thinking about FTX. Think About Getting Zika Instead. | jeberts | 1mo | 5
180 | Most* small probabilities aren't pascalian | Gregory Lewis | 4mo | 20
166 | Experimental longtermism: theory needs data | Jan_Kulveit | 9mo | 10
164 | Lord Martin Rees: an appreciation | HaydnBelfield | 1mo | 18
160 | Map of Biosecurity Interventions | James Lin | 1mo | 29
Score | Title | Author | Posted | Comments
300 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7
295 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31
280 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188
244 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
243 | AI Governance: Opportunity and Theory of Impact | Allan Dafoe | 2y | 16
232 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62
214 | How I failed to form views on AI safety | Ada-Maaria Hyvärinen | 8mo | 72
194 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12
190 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16
184 | Announcing Epoch: A research organization investigating the road to Transformative AI | Jaime Sevilla | 5mo | 11
180 | AGI Ruin: A List of Lethalities | EliezerYudkowsky | 6mo | 55
177 | AI Risk is like Terminator; Stop Saying it's Not | skluug | 9mo | 43
168 | Information security careers for GCR reduction | ClaireZabel | 3y | 34
154 | 2019 AI Alignment Literature Review and Charity Comparison | Larks | 3y | 28