Existential risk (473 posts)
Topics: Biosecurity, COVID-19 pandemic, History of effective altruism, Vaccines, Global catastrophic biological risk, Information hazard, Pandemic preparedness, The Precipice, Atomically precise manufacturing, Climate engineering, Research agendas, questions, and project lists
AI risk (721 posts)
Topics: AI alignment, AI governance, AI safety, AI forecasting, Artificial intelligence, European Union, Transformative artificial intelligence, Information security, Standards and regulation, Eliezer Yudkowsky, Paul Christiano
Posts: Existential risk

Karma | Title | Author | Posted | Comments
50 | Existential risk mitigation: What I worry about when there are only bad options | MMMaas | 1d | 2
381 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8
165 | What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation | Linch | 6d | 8
111 | EA's Achievements in 2022 | ElliotJDavies | 6d | 7
37 | Sir Gavin and the green sky | Gavin | 3d | 0
374 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56
33 | COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020 | NunoSempere | 4d | 2
48 | Announcing ERA: a spin-off from CERI | Nandini Shiralkar | 7d | 7
188 | Review: What We Owe The Future | Kelsey Piper | 29d | 3
186 | Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt | Jeremy | 1mo | 7
147 | Stop Thinking about FTX. Think About Getting Zika Instead. | jeberts | 1mo | 5
9 | Undesired dystopia: the parables of island rats and cut-throat capitalists | Will Aldred | 3d | 0
319 | Overreacting to current events can be very costly | Kelsey Piper | 2mo | 71
70 | Come get malaria with me? | jeberts | 21d | 4
Posts: AI risk

Karma | Title | Author | Posted | Comments
64 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1
65 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
40 | The ‘Old AI’: Lessons for AI governance from early electricity regulation | Sam Clarke | 1d | 1
60 | We should say more than “x-risk is high” | OllieBase | 4d | 6
57 | Concrete actionable policies relevant to AI safety (written 2019) | weeatquince | 4d | 0
326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
186 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
81 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2
33 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19
28 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 0
88 | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7
57 | Main paths to impact in EU AI Policy | JOMG_Monnet | 12d | 1
12 | What we owe the microbiome | TeddyW | 3d | 0
26 | Existential AI Safety is NOT separate from near-term applications | stecas | 7d | 9