Existential risk (473 posts)
Related tags: Biosecurity · COVID-19 pandemic · History of effective altruism · Vaccines · Global catastrophic biological risk · Information hazard · Pandemic preparedness · The Precipice · Atomically precise manufacturing · Climate engineering · Research agendas, questions, and project lists
AI risk (721 posts)
Related tags: AI alignment · AI governance · AI safety · AI forecasting · Artificial intelligence · European Union · Transformative artificial intelligence · Information security · Standards and regulation · Eliezer Yudkowsky · Paul Christiano
Posts (karma · title · author · posted · comments):

41 · Existential risk mitigation: What I worry about when there are only bad options · MMMaas · 1d · 2
69 · Beyond Simple Existential Risk: Survival in a Complex Interconnected World · Gideon Futerman · 29d · 65
385 · Some observations from an EA-adjacent (?) charitable effort · patio11 · 11d · 8
120 · Database of existential risk estimates · MichaelA · 2y · 37
341 · A Letter to the Bulletin of Atomic Scientists · John G. Halstead · 27d · 56
98 · EA's Achievements in 2022 · ElliotJDavies · 6d · 7
139 · What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation · Linch · 6d · 8
46 · Announcing ERA: a spin-off from CERI · Nandini Shiralkar · 7d · 7
280 · Overreacting to current events can be very costly · Kelsey Piper · 2mo · 71
22 · COVID-19 in rural Balochistan, Pakistan: Two interviews from May 2020 · NunoSempere · 4d · 2
151 · Map of Biosecurity Interventions · James Lin · 1mo · 29
48 · A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter · Lawrence Newport · 1mo · 16
16 · Pandemic Preparedness: Stakeholders who influence politicians in the UK · Alexandra Malikova · 12d · 1
43 · Longtermists Should Work on AI - There is No "AI Neutral" Scenario · simeon_c · 4mo · 62
66 · AGI Timelines in Governance: Different Strategies for Different Timeframes · simeon_c · 1d · 17
49 · High-level hopes for AI alignment · Holden Karnofsky · 21h · 1
-3 · AGI Isn't Close - Future Fund Worldview Prize · Toni MUENDEL · 2d · 14
28 · How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? · AnonymousAccount · 4d · 19
285 · Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest · Jason Schukraft · 29d · 18
49 · We should say more than "x-risk is high" · OllieBase · 4d · 6
69 · All AGI Safety questions welcome (especially basic ones) [~monthly thread] · robertskmiles · 1mo · 94
28 · Existential AI Safety is NOT separate from near-term applications · stecas · 7d · 9
30 · Have your timelines changed as a result of ChatGPT? · Chris Leong · 15d · 18
38 · The 'Old AI': Lessons for AI governance from early electricity regulation · Sam Clarke · 1d · 1
96 · 'Dissolving' AI Risk – Parameter Uncertainty in AI Future Forecasting · Froolow · 2mo · 63
55 · Two contrasting models of "intelligence" and future growth · Magnus Vinding · 26d · 17
51 · Race to the Top: Benchmarks for AI Safety · isaduan · 16d · 8
26 · Questions about AI that bother me · Eleni_A · 7d · 5