Tags: Cause Prioritization (34 posts) · Risks of Astronomical Suffering (S-risks) · 80,000 Hours · Crucial Considerations · Center on Long-Term Risk (CLR) (7 posts)
Posts (karma · title · author · posted · comments):

17 · Should you refrain from having children because of the risk posed by artificial intelligence? · Mientras · 3mo · 28 comments
60 · New book on s-risks · Tobias_Baumann · 1mo · 1 comment
3 · How likely do you think worse-than-extinction type fates to be? · span1 · 4mo · 3 comments
27 · [Link]: 80,000 hours blog · Larks · 10y · 11 comments
33 · S-risks: Why they are the worst existential risks, and how to prevent them · Kaj_Sotala · 5y · 106 comments
26 · Should you work at 80,000 Hours? · Jess_Whittlestone · 9y · 11 comments
12 · Overview of Rethink Priorities’ work on risks from nuclear weapons · MichaelA · 1y · 0 comments
9 · Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority · ignoranceprior · 6y · 4 comments
58 · Giving What We Can, 80,000 Hours, and Meta-Charity · wdmacaskill · 10y · 185 comments
42 · Efficient Charity · multifoliaterose · 12y · 185 comments
29 · What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · David Scott Krueger (formerly: capybaralet) · 3y · 27 comments
15 · ‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom · casebash · 5y · 3 comments
38 · On characterizing heavy-tailedness · Jsevillamol · 2y · 6 comments
7 · Mini map of s-risks · turchin · 5y · 34 comments
54 · CLR's recent work on multi-agent systems · JesseClifton · 1y · 1 comment
27 · Sections 5 & 6: Contemporary Architectures, Humans in the Loop · JesseClifton · 3y · 4 comments
19 · Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms · JesseClifton · 3y · 2 comments
59 · Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI · JesseClifton · 3y · 10 comments
34 · Sections 1 & 2: Introduction, Strategy and Governance · JesseClifton · 3y · 5 comments
6 · Multiverse-wide Cooperation via Correlated Decision Making · Kaj_Sotala · 5y · 2 comments
14 · Section 7: Foundations of Rational Agency · JesseClifton · 2y · 4 comments