Suffering (94 posts)
Related tags: Animal Welfare, Risks of Astronomical Suffering (S-risks), Cause Prioritization, Center on Long-Term Risk (CLR), 80,000 Hours, Crucial Considerations, Veg*nism, Ethical Offsets
Score | Title | Author | Age | Comments
23 | Should you refrain from having children because of the risk posed by artificial intelligence? | Mientras | 3mo | 28
7 | EA, Veganism and Negative Animal Utilitarianism | Yair Halberstadt | 3mo | 12
-10 | A Longtermist case against Veganism | Connor Tabarrok | 2mo | 2
4 | Some thoughts on Animals | nitinkhanna | 5mo | 6
19 | Peter Singer's first published piece on AI | Fai | 5mo | 5
4 | Vegetarianism and depression | Maggy | 2mo | 2
124 | New book on s-risks | Tobias_Baumann | 1mo | 1
6 | How likely do you think worse-than-extinction type fates to be? | span1 | 4mo | 3
77 | CLR's recent work on multi-agent systems | JesseClifton | 1y | 1
31 | Some thoughts on vegetarianism and veganism | Richard_Ngo | 10mo | 25
21 | [Link]: 80,000 hours blog | Larks | 10y | 11
44 | S-risks: Why they are the worst existential risks, and how to prevent them | Kaj_Sotala | 5y | 106
8 | A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems | moridinamael | 4y | 58
30 | Sections 5 & 6: Contemporary Architectures, Humans in the Loop | JesseClifton | 3y | 4

Research Agendas (29 posts)
Score | Title | Author | Age | Comments
34 | My AGI safety research—2022 review, ’23 plans | Steven Byrnes | 6d | 6
300 | On how various plans miss the hard bits of the alignment challenge | So8res | 5mo | 81
190 | Some conceptual alignment research projects | Richard_Ngo | 3mo | 14
21 | Distilled Representations Research Agenda | Hoagy | 2mo | 2
-10 | All life's helpers' beliefs | Tehdastehdas | 1mo | 1
90 | Testing The Natural Abstraction Hypothesis: Project Update | johnswentworth | 1y | 17
55 | Resources for AI Alignment Cartography | Gyrodiot | 2y | 8
44 | Technical AGI safety research outside AI | Richard_Ngo | 3y | 3
80 | The Learning-Theoretic AI Alignment Research Agenda | Vanessa Kosoy | 4y | 39
33 | AI Alignment Research Overview (by Jacob Steinhardt) | Ben Pace | 3y | 0
55 | Research Agenda v0.9: Synthesising a human's preferences into a utility function | Stuart_Armstrong | 3y | 25
27 | New safety research agenda: scalable agent alignment via reward modeling | Vika | 4y | 13
25 | Research Agenda in reverse: what *would* a solution look like? | Stuart_Armstrong | 3y | 25
8 | Acknowledgements & References | JesseClifton | 3y | 0