94 posts — Suffering, Animal Welfare, Risks of Astronomical Suffering (S-risks), Cause Prioritization, Center on Long-Term Risk (CLR), 80,000 Hours, Crucial Considerations, Veg*nism, Ethical Offsets

29 posts — Research Agendas
Karma | Title | Author | Posted | Comments
124 | New book on s-risks | Tobias_Baumann | 1mo | 1
23 | Should you refrain from having children because of the risk posed by artificial intelligence? | Mientras | 3mo | 28
135 | Demand offsetting | paulfchristiano | 1y | 38
19 | Peter Singer's first published piece on AI | Fai | 5mo | 5
77 | CLR's recent work on multi-agent systems | JesseClifton | 1y | 1
31 | Some thoughts on vegetarianism and veganism | Richard_Ngo | 10mo | 25
40 | Prioritization Research for Advancing Wisdom and Intelligence | ozziegooen | 1y | 8
25 | Paperclippers, s-risks, hope | superads91 | 10mo | 17
7 | EA, Veganism and Negative Animal Utilitarianism | Yair Halberstadt | 3mo | 12
4 | Vegetarianism and depression | Maggy | 2mo | 2
27 | Quick general thoughts on suffering and consciousness | Rob Bensinger | 1y | 42
81 | Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI | JesseClifton | 3y | 10
6 | How likely do you think worse-than-extinction type fates to be? | span1 | 4mo | 3
21 | Book Review: The Ethics of What We Eat | Jonah_O | 1y | 1
34 | My AGI safety research—2022 review, ’23 plans | Steven Byrnes | 6d | 6
300 | On how various plans miss the hard bits of the alignment challenge | So8res | 5mo | 81
190 | Some conceptual alignment research projects | Richard_Ngo | 3mo | 14
21 | Distilled Representations Research Agenda | Hoagy | 2mo | 2
69 | Eliciting Latent Knowledge (ELK) - Distillation/Summary | Marius Hobbhahn | 6mo | 2
90 | Testing The Natural Abstraction Hypothesis: Project Update | johnswentworth | 1y | 17
267 | Embedded Agents | abramdemski | 4y | 41
107 | Our take on CHAI’s research agenda in under 1500 words | Alex Flint | 2y | 19
25 | New year, new research agenda post | Charlie Steiner | 11mo | 4
55 | Resources for AI Alignment Cartography | Gyrodiot | 2y | 8
90 | Announcement: AI alignment prize round 3 winners and next round | cousin_it | 4y | 7
20 | Immobile AI makes a move: anti-wireheading, ontology change, and model splintering | Stuart_Armstrong | 1y | 3
80 | The Learning-Theoretic AI Alignment Research Agenda | Vanessa Kosoy | 4y | 39
55 | Research Agenda v0.9: Synthesising a human's preferences into a utility function | Stuart_Armstrong | 3y | 25