94 posts: Suffering, Animal Welfare, Risks of Astronomical Suffering (S-risks), Cause Prioritization, Center on Long-Term Risk (CLR), 80,000 Hours, Crucial Considerations, Veg*nism, Ethical Offsets
29 posts: Research Agendas
17 points · Should you refrain from having children because of the risk posed by artificial intelligence? · Mientras · 3mo · 28 comments
9 points · EA, Veganism and Negative Animal Utilitarianism · Yair Halberstadt · 3mo · 12 comments
-2 points · A Longtermist case against Veganism · Connor Tabarrok · 2mo · 2 comments
2 points · Some thoughts on Animals · nitinkhanna · 5mo · 6 comments
20 points · Peter Singer's first published piece on AI · Fai · 5mo · 5 comments
2 points · Vegetarianism and depression · Maggy · 2mo · 2 comments
60 points · New book on s-risks · Tobias_Baumann · 1mo · 1 comment
3 points · How likely do you think worse-than-extinction type fates to be? · span1 · 4mo · 3 comments
54 points · CLR's recent work on multi-agent systems · JesseClifton · 1y · 1 comment
33 points · Some thoughts on vegetarianism and veganism · Richard_Ngo · 10mo · 25 comments
27 points · [Link]: 80,000 hours blog · Larks · 10y · 11 comments
33 points · S-risks: Why they are the worst existential risks, and how to prevent them · Kaj_Sotala · 5y · 106 comments
10 points · A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems · moridinamael · 4y · 58 comments
27 points · Sections 5 & 6: Contemporary Architectures, Humans in the Loop · JesseClifton · 3y · 4 comments
34 points · My AGI safety research—2022 review, '23 plans · Steven Byrnes · 6d · 6 comments
258 points · On how various plans miss the hard bits of the alignment challenge · So8res · 5mo · 81 comments
168 points · Some conceptual alignment research projects · Richard_Ngo · 3mo · 14 comments
15 points · Distilled Representations Research Agenda · Hoagy · 2mo · 2 comments
-12 points · All life's helpers' beliefs · Tehdastehdas · 1mo · 1 comment
83 points · Testing The Natural Abstraction Hypothesis: Project Update · johnswentworth · 1y · 17 comments
45 points · Resources for AI Alignment Cartography · Gyrodiot · 2y · 8 comments
43 points · Technical AGI safety research outside AI · Richard_Ngo · 3y · 3 comments
76 points · The Learning-Theoretic AI Alignment Research Agenda · Vanessa Kosoy · 4y · 39 comments
43 points · AI Alignment Research Overview (by Jacob Steinhardt) · Ben Pace · 3y · 0 comments
67 points · Research Agenda v0.9: Synthesising a human's preferences into a utility function · Stuart_Armstrong · 3y · 25 comments
34 points · New safety research agenda: scalable agent alignment via reward modeling · Vika · 4y · 13 comments
34 points · Research Agenda in reverse: what *would* a solution look like? · Stuart_Armstrong · 3y · 25 comments
6 points · Acknowledgements & References · JesseClifton · 3y · 0 comments