Tags: Risks of Astronomical Suffering (S-risks) (8 posts) · Cause Prioritization (26 posts) · 80,000 Hours · Crucial Considerations
Risks of Astronomical Suffering (S-risks) — posts:

Score | Title | Author | Age | Comments
60 | New book on s-risks | Tobias_Baumann | 1mo | 1
33 | S-risks: Why they are the worst existential risks, and how to prevent them | Kaj_Sotala | 5y | 106
17 | Should you refrain from having children because of the risk posed by artificial intelligence? | Mientras | 3mo | 28
13 | Paperclippers, s-risks, hope | superads91 | 10mo | 17
9 | Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority | ignoranceprior | 6y | 4
7 | Mini map of s-risks | turchin | 5y | 34
6 | Outcome Terminology? | Dach | 2y | 0
3 | How likely do you think worse-than-extinction type fates to be? | span1 | 4mo | 3
Cause Prioritization — posts:

Score | Title | Author | Age | Comments
60 | Why CFAR's Mission? | AnnaSalamon | 6y | 57
58 | Giving What We Can, 80,000 Hours, and Meta-Charity | wdmacaskill | 10y | 185
56 | Robustness of Cost-Effectiveness Estimates and Philanthropy | JonahS | 9y | 37
48 | Prioritization Research for Advancing Wisdom and Intelligence | ozziegooen | 1y | 8
46 | 80,000 Hours: EA and Highly Political Causes | The_Jaded_One | 5y | 25
42 | Efficient Charity | multifoliaterose | 12y | 185
41 | Ben Hoffman's donor recommendations | Rob Bensinger | 4y | 19
38 | On characterizing heavy-tailedness | Jsevillamol | 2y | 6
34 | Defeating Mundane Holocausts With Robots | lsparrish | 11y | 28
34 | Responses to questions on donating to 80k, GWWC, EAA and LYCS | wdmacaskill | 10y | 20
33 | Cause Awareness as a Factor against Cause Neutrality | Darmani | 4y | 3
31 | Maximizing Cost-effectiveness via Critical Inquiry | HoldenKarnofsky | 11y | 24
31 | What To Do: Environmentalism vs Friendly AI (John Baez) | XiXiDu | 11y | 63
29 | What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? | David Scott Krueger (formerly: capybaralet) | 3y | 27