Tags:
- Cause Prioritization (34 posts)
- Risks of Astronomical Suffering (S-risks)
- 80,000 Hours
- Crucial Considerations
- Center on Long-Term Risk (CLR) (7 posts)
Karma | Title | Author | Posted | Comments
----- | ----- | ------ | ------ | --------
23  | Should you refrain from having children because of the risk posed by artificial intelligence? | Mientras | 3mo | 28
124 | New book on s-risks | Tobias_Baumann | 1mo | 1
6   | How likely do you think worse-than-extinction type fates to be? | span1 | 4mo | 3
21  | [Link]: 80,000 hours blog | Larks | 10y | 11
44  | S-risks: Why they are the worst existential risks, and how to prevent them | Kaj_Sotala | 5y | 106
18  | Should you work at 80,000 Hours? | Jess_Whittlestone | 9y | 11
9   | Overview of Rethink Priorities’ work on risks from nuclear weapons | MichaelA | 1y | 0
8   | Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority | ignoranceprior | 6y | 4
45  | Giving What We Can, 80,000 Hours, and Meta-Charity | wdmacaskill | 10y | 185
33  | Efficient Charity | multifoliaterose | 12y | 185
26  | What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? | David Scott Krueger (formerly: capybaralet) | 3y | 27
10  | ‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom | casebash | 5y | 3
42  | On characterizing heavy-tailedness | Jsevillamol | 2y | 6
8   | Mini map of s-risks | turchin | 5y | 34
77  | CLR's recent work on multi-agent systems | JesseClifton | 1y | 1
30  | Sections 5 & 6: Contemporary Architectures, Humans in the Loop | JesseClifton | 3y | 4
24  | Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms | JesseClifton | 3y | 2
81  | Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI | JesseClifton | 3y | 10
44  | Sections 1 & 2: Introduction, Strategy and Governance | JesseClifton | 3y | 5
4   | Multiverse-wide Cooperation via Correlated Decision Making | Kaj_Sotala | 5y | 2
17  | Section 7: Foundations of Rational Agency | JesseClifton | 2y | 4