Existential risk (473 posts)
Related topics: Biosecurity, COVID-19 pandemic, History of effective altruism, Vaccines, Global catastrophic biological risk, Information hazard, Pandemic preparedness, The Precipice, Atomically precise manufacturing, Climate engineering, Research agendas, questions, and project lists
AI risk (721 posts)
Related topics: AI alignment, AI governance, AI safety, AI forecasting, Artificial intelligence, European Union, Transformative artificial intelligence, Information security, Standards and regulation, Eliezer Yudkowsky, Paul Christiano
Karma | Title | Author | Posted | Comments
389 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8
32 | Existential risk mitigation: What I worry about when there are only bad options | MMMaas | 1d | 2
113 | What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation | Linch | 6d | 8
85 | EA's Achievements in 2022 | ElliotJDavies | 6d | 7
31 | Undesired dystopia: the parables of island rats and cut-throat capitalists | Will Aldred | 3d | 0
308 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56
25 | Sir Gavin and the green sky | Gavin | 3d | 0
44 | Announcing ERA: a spin-off from CERI | Nandini Shiralkar | 7d | 7
195 | Stop Thinking about FTX. Think About Getting Zika Instead. | jeberts | 1mo | 5
140 | Review: What We Owe The Future | Kelsey Piper | 29d | 3
144 | Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt | Jeremy | 1mo | 7
76 | Come get malaria with me? | jeberts | 21d | 4
11 | COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020 | NunoSempere | 4d | 2
34 | Pandemic prevention as fire-fighting by Richard Williamson (Alvea.bio) for Works in Progress | Nick Whitaker | 12d | 1
67 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
34 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1
36 | The ‘Old AI’: Lessons for AI governance from early electricity regulation | Sam Clarke | 1d | 1
38 | We should say more than “x-risk is high” | OllieBase | 4d | 6
33 | Concrete actionable policies relevant to AI safety (written 2019) | weeatquince | 4d | 0
32 | How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? | Tristan Cook | 4d | 0
75 | AI Safety Seems Hard to Measure | Holden Karnofsky | 9d | 2
244 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
144 | A challenge for AGI organizations, and a challenge for readers | RobBensinger | 19d | 13
16 | What we owe the microbiome | TeddyW | 3d | 0
23 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19
63 | Main paths to impact in EU AI Policy | JOMG_Monnet | 12d | 1
66 | Thoughts on AGI organizations and capabilities work | RobBensinger | 13d | 7
52 | ChatGPT interviewed on TV | Will Aldred | 11d | 1