Suffering (94 posts)
Related tags: Animal Welfare · Risks of Astronomical Suffering (S-risks) · Cause Prioritization · Center on Long-Term Risk (CLR) · 80,000 Hours · Crucial Considerations · Veg*nism · Ethical Offsets
135 · Demand offsetting · paulfchristiano · 1y · 38 comments
124 · New book on s-risks · Tobias_Baumann · 1mo · 1 comment
81 · Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI · JesseClifton · 3y · 10 comments
77 · CLR's recent work on multi-agent systems · JesseClifton · 1y · 1 comment
65 · "Just Suffer Until It Passes" · lionhearted · 4y · 26 comments
59 · Wirehead your Chickens · shminux · 4y · 53 comments
58 · Why Eat Less Meat? · Peter Wildeford · 9y · 516 comments
45 · Giving What We Can, 80,000 Hours, and Meta-Charity · wdmacaskill · 10y · 185 comments
45 · Robustness of Cost-Effectiveness Estimates and Philanthropy · JonahS · 9y · 37 comments
44 · S-risks: Why they are the worst existential risks, and how to prevent them · Kaj_Sotala · 5y · 106 comments
44 · Sections 1 & 2: Introduction, Strategy and Governance · JesseClifton · 3y · 5 comments
42 · On characterizing heavy-tailedness · Jsevillamol · 2y · 6 comments
42 · Arguments Against Speciesism · Lukas_Gloor · 9y · 479 comments
41 · Many of us *are* hit with a baseball once a month. · Alexandros · 12y · 31 comments
Research Agendas (29 posts)

300 · On how various plans miss the hard bits of the alignment challenge · So8res · 5mo · 81 comments
267 · Embedded Agents · abramdemski · 4y · 41 comments
190 · Some conceptual alignment research projects · Richard_Ngo · 3mo · 14 comments
107 · Our take on CHAI’s research agenda in under 1500 words · Alex Flint · 2y · 19 comments
90 · Testing The Natural Abstraction Hypothesis: Project Update · johnswentworth · 1y · 17 comments
90 · Announcement: AI alignment prize round 3 winners and next round · cousin_it · 4y · 7 comments
80 · The Learning-Theoretic AI Alignment Research Agenda · Vanessa Kosoy · 4y · 39 comments
69 · Eliciting Latent Knowledge (ELK) - Distillation/Summary · Marius Hobbhahn · 6mo · 2 comments
55 · Resources for AI Alignment Cartography · Gyrodiot · 2y · 8 comments
55 · Research Agenda v0.9: Synthesising a human's preferences into a utility function · Stuart_Armstrong · 3y · 25 comments
44 · Technical AGI safety research outside AI · Richard_Ngo · 3y · 3 comments
34 · MIRI's technical research agenda · So8res · 7y · 52 comments
34 · My AGI safety research—2022 review, ’23 plans · Steven Byrnes · 6d · 6 comments
33 · AI Alignment Research Overview (by Jacob Steinhardt) · Ben Pace · 3y · 0 comments