Branch 1 (1,654 posts): Farmed animal welfare, Cause candidates, Policy, Wild animal welfare, Rethink Priorities, Less-discussed causes, Improving institutional decision-making, Surveys, Video, Data (EA Community), History, Nuclear warfare

Branch 2 (1,194 posts): AI risk, AI alignment, Existential risk, Biosecurity, AI governance, COVID-19 pandemic, AI safety, AI forecasting, Artificial intelligence, Transformative artificial intelligence, Information hazard, History of effective altruism
Branch 1 posts:

Karma | Title | Author | Posted | Comments
20 | List of cause areas that EA should potentially prioritise more | freedomandutility | 3d | 11
22 | Veganism, Optimal Health, and Intellectual Honesty | Michael_2358 | 1d | 18
172 | Announcing WildAnimalSuffering.org, a new resource launched for the cause | David van Beveren | 8d | 27
143 | Working with the Beef Industry for Chicken Welfare | RobertY | 2d | 14
258 | Announcing EA Survey 2022 | David_Moss | 1mo | 34
76 | Kurzgesagt's most recent video promoting the introduction of wildlife to other planets is unethical and irresponsible | David van Beveren | 9d | 33
67 | The deathprint of replacing beef with chicken and insect meat | Stijn | 20d | 17
90 | Getting money out of politics and into charity | Eric Neyman | 2y | 48
108 | Octopuses (Probably) Don't Have Nine Minds | Bob Fischer | 8d | 19
278 | Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight | Adam Shriver | 22d | 45
8 | [linkpost] Is China Planning to Attack Taiwan? A Careful Consideration of Available Evidence Says No. | Jack Cunningham | 5d | 4
104 | Banding Together to Ban Octopus Farming | Tessa @ ALI | 20d | 10
14 | Doing good as a monkey | tobyj | 2d | 1
279 | The Welfare Range Table | Bob Fischer | 1mo | 14
Branch 2 posts:

Karma | Title | Author | Posted | Comments
65 | AGI Timelines in Governance: Different Strategies for Different Timeframes | simeon_c | 1d | 17
64 | High-level hopes for AI alignment | Holden Karnofsky | 21h | 1
50 | Existential risk mitigation: What I worry about when there are only bad options | MMMaas | 1d | 2
-3 | AGI Isn’t Close - Future Fund Worldview Prize | Toni MUENDEL | 2d | 14
55 | Beyond Simple Existential Risk: Survival in a Complex Interconnected World | Gideon Futerman | 29d | 65
381 | Some observations from an EA-adjacent (?) charitable effort | patio11 | 11d | 8
33 | How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? | AnonymousAccount | 4d | 19
326 | Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest | Jason Schukraft | 29d | 18
120 | Database of existential risk estimates | MichaelA | 2y | 37
60 | We should say more than “x-risk is high” | OllieBase | 4d | 6
374 | A Letter to the Bulletin of Atomic Scientists | John G. Halstead | 27d | 56
82 | All AGI Safety questions welcome (especially basic ones) [~monthly thread] | robertskmiles | 1mo | 94
111 | EA's Achievements in 2022 | ElliotJDavies | 6d | 7
26 | Existential AI Safety is NOT separate from near-term applications | stecas | 7d | 9