Topics (1654 posts): Farmed animal welfare, Cause candidates, Policy, Wild animal welfare, Rethink Priorities, Less-discussed causes, Improving institutional decision-making, Surveys, Video, Data (EA Community), History, Nuclear warfare
Topics (1194 posts): AI risk, AI alignment, Existential risk, Biosecurity, AI governance, COVID-19 pandemic, AI safety, AI forecasting, Artificial intelligence, Transformative artificial intelligence, Information hazard, History of effective altruism
Karma · Title · Author · Posted · Comments
30 · List of cause areas that EA should potentially prioritise more · freedomandutility · 3d · 11
18 · Veganism, Optimal Health, and Intellectual Honesty · Michael_2358 · 1d · 18
176 · Announcing WildAnimalSuffering.org, a new resource launched for the cause · David van Beveren · 8d · 27
123 · Working with the Beef Industry for Chicken Welfare · RobertY · 2d · 14
240 · Announcing EA Survey 2022 · David_Moss · 1mo · 34
100 · Kurzgesagt's most recent video promoting the introduction of wildlife to other planets is unethical and irresponsible · David van Beveren · 9d · 33
53 · The deathprint of replacing beef by chicken and insect meat · Stijn · 20d · 17
128 · Getting money out of politics and into charity · Eric Neyman · 2y · 48
70 · Octopuses (Probably) Don't Have Nine Minds · Bob Fischer · 8d · 19
264 · Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight · Adam Shriver · 22d · 45
6 · [linkpost] Is China Planning to Attack Taiwan? A Careful Consideration of Available Evidence Says No. · Jack Cunningham · 5d · 4
122 · Banding Together to Ban Octopus Farming · Tessa @ ALI · 20d · 10
14 · Doing good as a monkey · tobyj · 2d · 1
353 · The Welfare Range Table · Bob Fischer · 1mo · 14
67 · AGI Timelines in Governance: Different Strategies for Different Timeframes · simeon_c · 1d · 17
34 · High-level hopes for AI alignment · Holden Karnofsky · 21h · 1
32 · Existential risk mitigation: What I worry about when there are only bad options · MMMaas · 1d · 2
-3 · AGI Isn’t Close - Future Fund Worldview Prize · Toni MUENDEL · 2d · 14
83 · Beyond Simple Existential Risk: Survival in a Complex Interconnected World · Gideon Futerman · 29d · 65
389 · Some observations from an EA-adjacent (?) charitable effort · patio11 · 11d · 8
23 · How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? · AnonymousAccount · 4d · 19
244 · Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest · Jason Schukraft · 29d · 18
120 · Database of existential risk estimates · MichaelA · 2y · 37
38 · We should say more than “x-risk is high” · OllieBase · 4d · 6
308 · A Letter to the Bulletin of Atomic Scientists · John G. Halstead · 27d · 56
56 · All AGI Safety questions welcome (especially basic ones) [~monthly thread] · robertskmiles · 1mo · 94
85 · EA's Achievements in 2022 · ElliotJDavies · 6d · 7
30 · Existential AI Safety is NOT separate from near-term applications · stecas · 7d · 9