Tags: Cause Prioritization (34 posts), Risks of Astronomical Suffering (S-risks), 80,000 Hours, Crucial Considerations, Center on Long-Term Risk (CLR) (7 posts)
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 11 | Should you refrain from having children because of the risk posed by artificial intelligence? | Mientras | 3mo | 28 |
| -4 | New book on s-risks | Tobias_Baumann | 1mo | 1 |
| 0 | How likely do you think worse-than-extinction type fates to be? | span1 | 4mo | 3 |
| 33 | [Link]: 80,000 hours blog | Larks | 10y | 11 |
| 22 | S-risks: Why they are the worst existential risks, and how to prevent them | Kaj_Sotala | 5y | 106 |
| 34 | Should you work at 80,000 Hours? | Jess_Whittlestone | 9y | 11 |
| 15 | Overview of Rethink Priorities’ work on risks from nuclear weapons | MichaelA | 1y | 0 |
| 10 | Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority | ignoranceprior | 6y | 4 |
| 71 | Giving What We Can, 80,000 Hours, and Meta-Charity | wdmacaskill | 10y | 185 |
| 51 | Efficient Charity | multifoliaterose | 12y | 185 |
| 32 | What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? | David Scott Krueger (formerly: capybaralet) | 3y | 27 |
| 20 | ‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom | casebash | 5y | 3 |
| 34 | On characterizing heavy-tailedness | Jsevillamol | 2y | 6 |
| 6 | Mini map of s-risks | turchin | 5y | 34 |
| 31 | CLR's recent work on multi-agent systems | JesseClifton | 1y | 1 |
| 24 | Sections 5 & 6: Contemporary Architectures, Humans in the Loop | JesseClifton | 3y | 4 |
| 14 | Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms | JesseClifton | 3y | 2 |
| 37 | Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI | JesseClifton | 3y | 10 |
| 24 | Sections 1 & 2: Introduction, Strategy and Governance | JesseClifton | 3y | 5 |
| 8 | Multiverse-wide Cooperation via Correlated Decision Making | Kaj_Sotala | 5y | 2 |
| 11 | Section 7: Foundations of Rational Agency | JesseClifton | 2y | 4 |