## Suffering

94 posts

Related tags: Animal Welfare, Risks of Astronomical Suffering (S-risks), Cause Prioritization, Center on Long-Term Risk (CLR), 80,000 Hours, Crucial Considerations, Veg*nism, Ethical Offsets

| Karma | Title | Author | Posted | Comments |
|------:|-------|--------|--------|---------:|
| 127 | Demand offsetting | paulfchristiano | 1y | 38 |
| 21 | Peter Singer's first published piece on AI | Fai | 5mo | 5 |
| 11 | Should you refrain from having children because of the risk posed by artificial intelligence? | Mientras | 3mo | 28 |
| 56 | Prioritization Research for Advancing Wisdom and Intelligence | ozziegooen | 1y | 8 |
| 6 | A Longtermist case against Veganism | Connor Tabarrok | 2mo | 2 |
| 11 | EA, Veganism and Negative Animal Utilitarianism | Yair Halberstadt | 3mo | 12 |
| 35 | Some thoughts on vegetarianism and veganism | Richard_Ngo | 10mo | 25 |
| 39 | Quick general thoughts on suffering and consciousness | Rob Bensinger | 1y | 42 |
| 71 | Against Dog Ownership | Ben Pace | 2y | 27 |
| 105 | Wirehead your Chickens | shminux | 4y | 53 |
| 111 | "Just Suffer Until It Passes" | lionhearted | 4y | 26 |
| 9 | Moral Weights of Six Animals, Considering Viewpoint Uncertainty - Seeds of Science call for reviewers | rogersbacon | 6mo | 2 |
| 17 | Tears Must Flow | sapphire | 1y | 27 |
| 31 | CLR's recent work on multi-agent systems | JesseClifton | 1y | 1 |

## Research Agendas

29 posts

| Karma | Title | Author | Posted | Comments |
|------:|-------|--------|--------|---------:|
| 34 | My AGI safety research—2022 review, ’23 plans | Steven Byrnes | 6d | 6 |
| 216 | On how various plans miss the hard bits of the alignment challenge | So8res | 5mo | 81 |
| 146 | Some conceptual alignment research projects | Richard_Ngo | 3mo | 14 |
| 9 | Distilled Representations Research Agenda | Hoagy | 2mo | 2 |
| 29 | Eliciting Latent Knowledge (ELK) - Distillation/Summary | Marius Hobbhahn | 6mo | 2 |
| 76 | Testing The Natural Abstraction Hypothesis: Project Update | johnswentworth | 1y | 17 |
| 117 | Our take on CHAI’s research agenda in under 1500 words | Alex Flint | 2y | 19 |
| 33 | New year, new research agenda post | Charlie Steiner | 11mo | 4 |
| 44 | Immobile AI makes a move: anti-wireheading, ontology change, and model splintering | Stuart_Armstrong | 1y | 3 |
| 129 | Embedded Agents | abramdemski | 4y | 41 |
| 79 | Research Agenda v0.9: Synthesising a human's preferences into a utility function | Stuart_Armstrong | 3y | 25 |
| 96 | Announcement: AI alignment prize round 3 winners and next round | cousin_it | 4y | 7 |
| 19 | AI, learn to be conservative, then learn to be less so: reducing side-effects, learning preserved features, and going beyond conservatism | Stuart_Armstrong | 1y | 4 |
| 53 | AI Alignment Research Overview (by Jacob Steinhardt) | Ben Pace | 3y | 0 |