Existential risk (473 posts)
Related topics: Biosecurity, COVID-19 pandemic, History of effective altruism, Vaccines, Global catastrophic biological risk, Information hazard, Pandemic preparedness, The Precipice, Atomically precise manufacturing, Climate engineering, Research agendas, questions, and project lists
AI risk (721 posts)
Related topics: AI alignment, AI governance, AI safety, AI forecasting, Artificial intelligence, European Union, Transformative artificial intelligence, Information security, Standards and regulation, Eliezer Yudkowsky, Paul Christiano
Posts

Karma | Title | Author | Posted | Comments
41 | Existential risk mitigation: What I worry about when there are only bad options | MMMaas | 1d | 2
385 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8
139 | What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation | Linch | 6d | 8
98 | EA's Achievements in 2022 | ElliotJDavies | 6d | 7
31 | Sir Gavin and the green sky | Gavin | 3d | 0
341 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56
46 | Announcing ERA: a spin-off from CERI | Nandini Shiralkar | 7d | 7
22 | COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020 | NunoSempere | 4d | 2
20 | Undesired dystopia: the parables of island rats and cut-throat capitalists | Will Aldred | 3d | 0
164 | Review: What We Owe The Future | Kelsey Piper | 29d | 3
171 | Stop Thinking about FTX. Think About Getting Zika Instead. | jeberts | 1mo | 5
165 | Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt | Jeremy | 1mo | 7
73 | Come get malaria with me? | jeberts | 21d | 4
32 | Pandemic prevention as fire-fighting by Richard Williamson (Alvea.bio) for Works in Progress | Nick Whitaker | 12d | 1
49 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1
66 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
38 | The ‘Old AI’: Lessons for AI governance from early electricity regulation | Sam Clarke | 1d | 1
49 | We should say more than “x-risk is high” | OllieBase | 4d | 6
45 | Concrete actionable policies relevant to AI safety (written 2019) | weeatquince | 4d | 0
30 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 0
285 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
78 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2
165 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
28 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19
14 | What we owe the microbiome | TeddyW | 3d | 0
77 | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7
60 | Main paths to impact in EU AI Policy | JOMG_Monnet | 12d | 1
28 | Existential AI Safety is NOT separate from near-term applications | stecas | 7d | 9