1654 posts: Farmed animal welfare, Cause candidates, Policy, Wild animal welfare, Rethink Priorities, Less-discussed causes, Improving institutional decision-making, Surveys, Video, Data (EA Community), History, Nuclear warfare
1194 posts: AI risk, AI alignment, Existential risk, Biosecurity, AI governance, COVID-19 pandemic, AI safety, AI forecasting, Artificial intelligence, Transformative artificial intelligence, Information hazard, History of effective altruism
| Score | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 377 | Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg | HaydnBelfield | 7mo | 37 |
| 305 | Some potential lessons from Carrick’s Congressional bid | Daniel_Eth | 7mo | 99 |
| 302 | Major UN report discusses existential risk and future generations (summary) | finm | 1y | 5 |
| 297 | What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? | Luisa_Rodriguez | 1y | 37 |
| 289 | EAF’s ballot initiative doubled Zurich’s development aid | Jonas Vollmer | 2y | 26 |
| 283 | Reducing long-term risks from malevolent actors | David_Althaus | 2y | 85 |
| 280 | The best $5,800 I’ve ever donated (to pandemic prevention). | ASB | 10mo | 86 |
| 279 | The Welfare Range Table | Bob Fischer | 1mo | 14 |
| 278 | Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight | Adam Shriver | 22d | 45 |
| 274 | Samotsvety Nuclear Risk update October 2022 | NunoSempere | 2mo | 52 |
| 267 | Problem areas beyond 80,000 Hours' current priorities | Ardenlk | 2y | 64 |
| 258 | Announcing EA Survey 2022 | David_Moss | 1mo | 34 |
| 255 | Case for emergency response teams | Gavin | 8mo | 47 |
| 245 | Should we buy coal mines? | John G. Halstead | 7mo | 31 |
| Score | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 381 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8 |
| 374 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56 |
| 362 | Announcing Alvea—An EA COVID Vaccine Project | kyle_fish | 10mo | 25 |
| 359 | Concrete Biosecurity Projects (some of which could be big) | ASB | 11mo | 72 |
| 326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18 |
| 319 | Overreacting to current events can be very costly | Kelsey Piper | 2mo | 71 |
| 261 | Why EAs are skeptical about AI Safety | Lukas Trötzmüller | 5mo | 31 |
| 236 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya | 5mo | 12 |
| 234 | On Deference and Yudkowsky's AI Risk Estimates | Ben Garfinkel | 6mo | 188 |
| 220 | Most* small probabilities aren't pascalian | Gregory Lewis | 4mo | 20 |
| 220 | My Most Likely Reason to Die Young is AI X-Risk | AISafetyIsNotLongtermist | 5mo | 62 |
| 210 | Reasons I’ve been hesitant about high levels of near-ish AI risk | elifland | 5mo | 16 |
| 204 | Information security careers for GCR reduction | ClaireZabel | 3y | 34 |
| 202 | Experimental longtermism: theory needs data | Jan_Kulveit | 9mo | 10 |