Topics (1654 posts): Farmed animal welfare · Cause candidates · Policy · Wild animal welfare · Rethink Priorities · Less-discussed causes · Improving institutional decision-making · Surveys · Video · Data (EA Community) · History · Nuclear warfare
Topics (1194 posts): AI risk · AI alignment · Existential risk · Biosecurity · AI governance · COVID-19 pandemic · AI safety · AI forecasting · Artificial intelligence · Transformative artificial intelligence · Information hazard · History of effective altruism
Top posts:
441 karma: Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg (HaydnBelfield, 7mo, 37 comments)
353 karma: The Welfare Range Table (Bob Fischer, 1mo, 14 comments)
331 karma: Reducing long-term risks from malevolent actors (David_Althaus, 2y, 85 comments)
321 karma: EAF’s ballot initiative doubled Zurich’s development aid (Jonas Vollmer, 2y, 26 comments)
320 karma: Major UN report discusses existential risk and future generations (summary) (finm, 1y, 5 comments)
316 karma: Growing the US tofu market - a roadmap (George Stiffman, 2mo, 35 comments)
293 karma: Big List of Cause Candidates (NunoSempere, 1y, 69 comments)
288 karma: The best $5,800 I’ve ever donated (to pandemic prevention). (ASB, 10mo, 86 comments)
285 karma: Some potential lessons from Carrick’s Congressional bid (Daniel_Eth, 7mo, 99 comments)
279 karma: Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey) (Lucius Caviola, 7mo, 32 comments)
279 karma: Problem areas beyond 80,000 Hours' current priorities (Ardenlk, 2y, 64 comments)
264 karma: Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight (Adam Shriver, 22d, 45 comments)
261 karma: What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? (Luisa_Rodriguez, 1y, 37 comments)
250 karma: Samotsvety Nuclear Risk update October 2022 (NunoSempere, 2mo, 52 comments)
Top posts:
390 karma: Announcing Alvea—An EA COVID Vaccine Project (kyle_fish, 10mo, 25 comments)
389 karma: Some observations from an EA-adjacent (?) charitable effort (patio11, 11d, 8 comments)
385 karma: Concrete Biosecurity Projects (some of which could be big) (ASB, 11mo, 72 comments)
308 karma: A Letter to the Bulletin of Atomic Scientists (John G. Halstead, 27d, 56 comments)
300 karma: How to pursue a career in technical AI alignment (CharlieRS, 6mo, 7 comments)
295 karma: Why EAs are skeptical about AI Safety (Lukas Trötzmüller, 5mo, 31 comments)
280 karma: On Deference and Yudkowsky's AI Risk Estimates (Ben Garfinkel, 6mo, 188 comments)
244 karma: Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest (Jason Schukraft, 29d, 18 comments)
243 karma: AI Governance: Opportunity and Theory of Impact (Allan Dafoe, 2y, 16 comments)
241 karma: Overreacting to current events can be very costly (Kelsey Piper, 2mo, 71 comments)
232 karma: My Most Likely Reason to Die Young is AI X-Risk (AISafetyIsNotLongtermist, 5mo, 62 comments)
217 karma: Announcing the Nucleic Acid Observatory project for early detection of catastrophic biothreats (Will Bradshaw, 7mo, 3 comments)
214 karma: How I failed to form views on AI safety (Ada-Maaria Hyvärinen, 8mo, 72 comments)
206 karma: List of Lists of Concrete Biosecurity Project Ideas (Tessa, 5mo, 5 comments)