Tags (94 posts): Suffering, Animal Welfare, Risks of Astronomical Suffering (S-risks), Cause Prioritization, Center on Long-Term Risk (CLR), 80,000 Hours, Crucial Considerations, Veg*nism, Ethical Offsets
Tags (29 posts): Research Agendas
Should you refrain from having children because of the risk posed by artificial intelligence? (Mientras, 3mo; 11 karma, 28 comments)
EA, Veganism and Negative Animal Utilitarianism (Yair Halberstadt, 3mo; 11 karma, 12 comments)
A Longtermist case against Veganism (Connor Tabarrok, 2mo; 6 karma, 2 comments)
Some thoughts on Animals (nitinkhanna, 5mo; 0 karma, 6 comments)
Peter Singer's first published piece on AI (Fai, 5mo; 21 karma, 5 comments)
Vegetarianism and depression (Maggy, 2mo; 0 karma, 2 comments)
New book on s-risks (Tobias_Baumann, 1mo; -4 karma, 1 comment)
How likely do you think worse-than-extinction type fates to be? (span1, 4mo; 0 karma, 3 comments)
CLR's recent work on multi-agent systems (JesseClifton, 1y; 31 karma, 1 comment)
Some thoughts on vegetarianism and veganism (Richard_Ngo, 10mo; 35 karma, 25 comments)
[Link]: 80,000 hours blog (Larks, 10y; 33 karma, 11 comments)
S-risks: Why they are the worst existential risks, and how to prevent them (Kaj_Sotala, 5y; 22 karma, 106 comments)
A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems (moridinamael, 4y; 12 karma, 58 comments)
Sections 5 & 6: Contemporary Architectures, Humans in the Loop (JesseClifton, 3y; 24 karma, 4 comments)
My AGI safety research—2022 review, ’23 plans (Steven Byrnes, 6d; 34 karma, 6 comments)
On how various plans miss the hard bits of the alignment challenge (So8res, 5mo; 216 karma, 81 comments)
Some conceptual alignment research projects (Richard_Ngo, 3mo; 146 karma, 14 comments)
Distilled Representations Research Agenda (Hoagy, 2mo; 9 karma, 2 comments)
All life's helpers' beliefs (Tehdastehdas, 1mo; -14 karma, 1 comment)
Testing The Natural Abstraction Hypothesis: Project Update (johnswentworth, 1y; 76 karma, 17 comments)
Resources for AI Alignment Cartography (Gyrodiot, 2y; 35 karma, 8 comments)
Technical AGI safety research outside AI (Richard_Ngo, 3y; 42 karma, 3 comments)
The Learning-Theoretic AI Alignment Research Agenda (Vanessa Kosoy, 4y; 72 karma, 39 comments)
AI Alignment Research Overview (by Jacob Steinhardt) (Ben Pace, 3y; 53 karma, 0 comments)
Research Agenda v0.9: Synthesising a human's preferences into a utility function (Stuart_Armstrong, 3y; 79 karma, 25 comments)
New safety research agenda: scalable agent alignment via reward modeling (Vika, 4y; 41 karma, 13 comments)
Research Agenda in reverse: what *would* a solution look like? (Stuart_Armstrong, 3y; 43 karma, 25 comments)
Acknowledgements & References (JesseClifton, 3y; 4 karma, 0 comments)