473 posts
Tags: Existential risk, Biosecurity, COVID-19 pandemic, History of effective altruism, Vaccines, Global catastrophic biological risk, Information hazard, Pandemic preparedness, The Precipice, Atomically precise manufacturing, Climate engineering, Research agendas, questions, and project lists
721 posts
Tags: AI risk, AI alignment, AI governance, AI safety, AI forecasting, Artificial intelligence, European Union, Transformative artificial intelligence, Information security, Standards and regulation, Eliezer Yudkowsky, Paul Christiano
Karma | Title | Author | Posted | Comments
385 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8
376 | Announcing Alvea—An EA COVID Vaccine Project | kyle_fish | 10mo | 25
372 | Concrete Biosecurity Projects (some of which could be big) | ASB | 11mo | 72
341 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56
280 | Overreacting to current events can be very costly | Kelsey Piper | 2mo | 71
208 | Announcing the Nucleic Acid Observatory project for early detection of catastrophic biothreats | Will Bradshaw | 7mo | 3
200 | Most* small probabilities aren't pascalian | Gregory Lewis | 4mo | 20
186 | List of Lists of Concrete Biosecurity Project Ideas | Tessa | 5mo | 5
184 | Experimental longtermism: theory needs data | Jan_Kulveit | 9mo | 10
177 | EA megaprojects continued | mariushobbhahn | 1y | 49
174 | COVID: How did we do? How can we know? | Ghost_of_Li_Wenliang | 1y | 48
172 | Lord Martin Rees: an appreciation | HaydnBelfield | 1mo | 18
171 | Stop Thinking about FTX. Think About Getting Zika Instead. | jeberts | 1mo | 5
165 | Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt | Jeremy | 1mo | 7
Karma | Title | Author | Posted | Comments
285 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
278 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31
257 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188
226 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7
226 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62
220 | AI Governance: Opportunity and Theory of Impact | Allan Dafoe | 2y | 16
215 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12
200 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16
192 | How I failed to form views on AI safety | Ada-Maaria Hyvärinen | 8mo | 72
186 | Information security careers for GCR reduction | ClaireZabel | 3y | 34
182 | Announcing Epoch: A research organization investigating the road to Transformative AI | Jaime Sevilla | 5mo | 11
174 | AI Risk is like Terminator; Stop Saying it's Not | skluug | 9mo | 43
169 | A personal take on longtermist AI governance | lukeprog | 1y | 5
165 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13