Tags:
Risks of Astronomical Suffering (S-risks): 8 posts
Cause Prioritization: 26 posts
80,000 Hours
Crucial Considerations
Posts (title, author, posted · points · comments):

New book on s-risks (Tobias_Baumann, 1mo) · 60 points · 1 comment
Should you refrain from having children because of the risk posed by artificial intelligence? (Mientras, 3mo) · 17 points · 28 comments
Paperclippers, s-risks, hope (superads91, 10mo) · 13 points · 17 comments
How likely do you think worse-than-extinction type fates to be? (span1, 4mo) · 3 points · 3 comments
S-risks: Why they are the worst existential risks, and how to prevent them (Kaj_Sotala, 5y) · 33 points · 106 comments
Outcome Terminology? (Dach, 2y) · 6 points · 0 comments
Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority (ignoranceprior, 6y) · 9 points · 4 comments
Mini map of s-risks (turchin, 5y) · 7 points · 34 comments
Prioritization Research for Advancing Wisdom and Intelligence (ozziegooen, 1y) · 48 points · 8 comments
On characterizing heavy-tailedness (Jsevillamol, 2y) · 38 points · 6 comments
Overview of Rethink Priorities’ work on risks from nuclear weapons (MichaelA, 1y) · 12 points · 0 comments
Ben Hoffman's donor recommendations (Rob Bensinger, 4y) · 41 points · 19 comments
What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? (David Scott Krueger (formerly: capybaralet), 3y) · 29 points · 27 comments
Why CFAR's Mission? (AnnaSalamon, 6y) · 60 points · 57 comments
Cause Awareness as a Factor against Cause Neutrality (Darmani, 4y) · 33 points · 3 comments
80,000 Hours: EA and Highly Political Causes (The_Jaded_One, 5y) · 46 points · 25 comments
A case for strategy research: what it is and why we need more of it (Siebe, 3y) · 23 points · 19 comments
Robustness of Cost-Effectiveness Estimates and Philanthropy (JonahS, 9y) · 56 points · 37 comments
Giving What We Can, 80,000 Hours, and Meta-Charity (wdmacaskill, 10y) · 58 points · 185 comments
Efficient Charity (multifoliaterose, 12y) · 42 points · 185 comments
Responses to questions on donating to 80k, GWWC, EAA and LYCS (wdmacaskill, 10y) · 34 points · 20 comments
FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord (Palus Astra, 2y) · 7 points · 1 comment