1654 posts

Tags: Farmed animal welfare · Cause candidates · Policy · Wild animal welfare · Rethink Priorities · Less-discussed causes · Improving institutional decision-making · Surveys · Video · Data (EA Community) · History · Nuclear warfare
1194 posts

Tags: AI risk · AI alignment · Existential risk · Biosecurity · AI governance · COVID-19 pandemic · AI safety · AI forecasting · Artificial intelligence · Transformative artificial intelligence · Information hazard · History of effective altruism
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 409 | Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg | HaydnBelfield | 7mo | 37 |
| 316 | The Welfare Range Table | Bob Fischer | 1mo | 14 |
| 311 | Major UN report discusses existential risk and future generations (summary) | finm | 1y | 5 |
| 307 | Reducing long-term risks from malevolent actors | David_Althaus | 2y | 85 |
| 305 | EAF’s ballot initiative doubled Zurich’s development aid | Jonas Vollmer | 2y | 26 |
| 295 | Some potential lessons from Carrick’s Congressional bid | Daniel_Eth | 7mo | 99 |
| 284 | The best $5,800 I’ve ever donated (to pandemic prevention). | ASB | 10mo | 86 |
| 279 | What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? | Luisa_Rodriguez | 1y | 37 |
| 273 | Problem areas beyond 80,000 Hours' current priorities | Ardenlk | 2y | 64 |
| 271 | Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight | Adam Shriver | 22d | 45 |
| 265 | Growing the US tofu market - a roadmap | George Stiffman | 2mo | 35 |
| 262 | Samotsvety Nuclear Risk update October 2022 | NunoSempere | 2mo | 52 |
| 259 | Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey) | Lucius Caviola | 7mo | 32 |
| 249 | Announcing EA Survey 2022 | David_Moss | 1mo | 34 |
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 385 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8 |
| 376 | Announcing Alvea—An EA COVID Vaccine Project | kyle_fish | 10mo | 25 |
| 372 | Concrete Biosecurity Projects (some of which could be big) | ASB | 11mo | 72 |
| 341 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56 |
| 285 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18 |
| 280 | Overreacting to current events can be very costly | Kelsey Piper | 2mo | 71 |
| 278 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31 |
| 257 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188 |
| 226 | How to pursue a career in technical AI alignment | CharlieRS | 6mo | 7 |
| 226 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62 |
| 220 | AI Governance: Opportunity and Theory of Impact | Allan Dafoe | 2y | 16 |
| 215 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12 |
| 208 | Announcing the Nucleic Acid Observatory project for early detection of catastrophic biothreats | Will Bradshaw | 7mo | 3 |
| 200 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16 |