Suffering (94 posts)
Related tags: Animal Welfare, Risks of Astronomical Suffering (S-risks), Cause Prioritization, Center on Long-Term Risk (CLR), 80,000 Hours, Crucial Considerations, Veg*nism, Ethical Offsets

New book on s-risks · Tobias_Baumann · 1mo · 60 karma · 1 comment
Should you refrain from having children because of the risk posed by artificial intelligence? · Mientras · 3mo · 17 karma · 28 comments
Demand offsetting · paulfchristiano · 1y · 131 karma · 38 comments
Peter Singer's first published piece on AI · Fai · 5mo · 20 karma · 5 comments
Prioritization Research for Advancing Wisdom and Intelligence · ozziegooen · 1y · 48 karma · 8 comments
Some thoughts on vegetarianism and veganism · Richard_Ngo · 10mo · 33 karma · 25 comments
EA, Veganism and Negative Animal Utilitarianism · Yair Halberstadt · 3mo · 9 karma · 12 comments
Quick general thoughts on suffering and consciousness · Rob Bensinger · 1y · 33 karma · 42 comments
CLR's recent work on multi-agent systems · JesseClifton · 1y · 54 karma · 1 comment
Against Dog Ownership · Ben Pace · 2y · 55 karma · 27 comments
Moral Weights of Six Animals, Considering Viewpoint Uncertainty - Seeds of Science call for reviewers · rogersbacon · 6mo · 9 karma · 2 comments
Book Review: The Ethics of What We Eat · Jonah_O · 1y · 20 karma · 1 comment
Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI · JesseClifton · 3y · 59 karma · 10 comments
Tears Must Flow · sapphire · 1y · 17 karma · 27 comments

Research Agendas (29 posts)

My AGI safety research—2022 review, ’23 plans · Steven Byrnes · 6d · 34 karma · 6 comments
On how various plans miss the hard bits of the alignment challenge · So8res · 5mo · 258 karma · 81 comments
Some conceptual alignment research projects · Richard_Ngo · 3mo · 168 karma · 14 comments
Distilled Representations Research Agenda · Hoagy · 2mo · 15 karma · 2 comments
Eliciting Latent Knowledge (ELK) - Distillation/Summary · Marius Hobbhahn · 6mo · 49 karma · 2 comments
Testing The Natural Abstraction Hypothesis: Project Update · johnswentworth · 1y · 83 karma · 17 comments
Our take on CHAI’s research agenda in under 1500 words · Alex Flint · 2y · 112 karma · 19 comments
Embedded Agents · abramdemski · 4y · 198 karma · 41 comments
New year, new research agenda post · Charlie Steiner · 11mo · 29 karma · 4 comments
Immobile AI makes a move: anti-wireheading, ontology change, and model splintering · Stuart_Armstrong · 1y · 32 karma · 3 comments
Announcement: AI alignment prize round 3 winners and next round · cousin_it · 4y · 93 karma · 7 comments
Research Agenda v0.9: Synthesising a human's preferences into a utility function · Stuart_Armstrong · 3y · 67 karma · 25 comments
Resources for AI Alignment Cartography · Gyrodiot · 2y · 45 karma · 8 comments
The Learning-Theoretic AI Alignment Research Agenda · Vanessa Kosoy · 4y · 76 karma · 39 comments