Tags
Suffering (94 posts)
Animal Welfare
Risks of Astronomical Suffering (S-risks)
Cause Prioritization
Center on Long-Term Risk (CLR)
80,000 Hours
Crucial Considerations
Veg*nism
Ethical Offsets
Research Agendas (29 posts)
Karma · Title · Author · Posted · Comments
127 · Demand offsetting · paulfchristiano · 1y · 38
111 · "Just Suffer Until It Passes" · lionhearted · 4y · 26
105 · Wirehead your Chickens · shminux · 4y · 53
80 · Why CFAR's Mission? · AnnaSalamon · 6y · 57
73 · Many of us *are* hit with a baseball once a month. · Alexandros · 12y · 31
71 · Giving What We Can, 80,000 Hours, and Meta-Charity · wdmacaskill · 10y · 185
71 · Against Dog Ownership · Ben Pace · 2y · 27
70 · Why Eat Less Meat? · Peter Wildeford · 9y · 516
67 · Robustness of Cost-Effectiveness Estimates and Philanthropy · JonahS · 9y · 37
64 · Overcoming suffering: Emotional acceptance · Kaj_Sotala · 11y · 44
62 · Mapping Fun Theory onto the challenges of ethical foie gras · HonoreDB · 11y · 62
61 · 80,000 Hours: EA and Highly Political Causes · The_Jaded_One · 5y · 25
56 · Prioritization Research for Advancing Wisdom and Intelligence · ozziegooen · 1y · 8
53 · Ben Hoffman's donor recommendations · Rob Bensinger · 4y · 19
Karma · Title · Author · Posted · Comments
216 · On how various plans miss the hard bits of the alignment challenge · So8res · 5mo · 81
146 · Some conceptual alignment research projects · Richard_Ngo · 3mo · 14
129 · Embedded Agents · abramdemski · 4y · 41
117 · Our take on CHAI's research agenda in under 1500 words · Alex Flint · 2y · 19
96 · Announcement: AI alignment prize round 3 winners and next round · cousin_it · 4y · 7
79 · Research Agenda v0.9: Synthesising a human's preferences into a utility function · Stuart_Armstrong · 3y · 25
76 · Testing The Natural Abstraction Hypothesis: Project Update · johnswentworth · 1y · 17
74 · MIRI's technical research agenda · So8res · 7y · 52
72 · The Learning-Theoretic AI Alignment Research Agenda · Vanessa Kosoy · 4y · 39
53 · AI Alignment Research Overview (by Jacob Steinhardt) · Ben Pace · 3y · 0
52 · Funding Good Research · lukeprog · 10y · 44
44 · Immobile AI makes a move: anti-wireheading, ontology change, and model splintering · Stuart_Armstrong · 1y · 3
43 · Research Agenda in reverse: what *would* a solution look like? · Stuart_Armstrong · 3y · 25
42 · Technical AGI safety research outside AI · Richard_Ngo · 3y · 3