Branch 1 (45 posts): AI Impacts, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, OpenAI, Survival and Flourishing, Global Catastrophic Risk Institute, Center for Human-Compatible Artificial Intelligence, DeepMind, Human Compatible, Stuart Russell, Jaan Tallinn, Leverhulme Center for the Future of Intelligence
Branch 2 (28 posts): Nonlinear Fund, Ought, AI interpretability, Redwood Research, Anthropic, Superintelligence, AI Alignment Forum, Instrumental convergence thesis, Malignant AI failure mode
Top posts (Branch 1):

| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 165 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 18 |
| 150 | 2020 AI Alignment Literature Review and Charity Comparison | Larks | 1y | 16 |
| 126 | Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? | Markus Amalthea Magnuson | 5mo | 2 |
| 58 | Primates vs birds: Is one brain architecture better than the other? | AI Impacts | 3y | 2 |
| 55 | AI Impacts: Historic trends in technological progress | Aaron Gertler | 2y | 5 |
| 54 | DeepMind is hiring Long-term Strategy & Governance researchers | vishal | 1y | 1 |
| 53 | The Slippery Slope from DALLE-2 to Deepfake Anarchy | stecas | 1mo | 11 |
| 51 | Common misconceptions about OpenAI | Jacob_Hilton | 3mo | 2 |
| 48 | DeepMind: Generally capable agents emerge from open-ended play | kokotajlod | 1y | 10 |
| 40 | Publication of Stuart Russell’s new book on AI safety - reviews needed | CaroJ | 3y | 8 |
| 35 | Visible Thoughts Project and Bounty Announcement | So8res | 1y | 2 |
| 32 | Summary of Stuart Russell's new book, "Human Compatible" | Rohin Shah | 3y | 1 |
| 29 | Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) | Habryka | 3y | 5 |
| 27 | Cortés, Pizarro, and Afonso as Precedents for Takeover | AI Impacts | 2y | 17 |
Top posts (Branch 2):

| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 182 | Listen to more EA content with The Nonlinear Library | Kat Woods | 1y | 89 |
| 170 | EA needs a hiring agency and Nonlinear will fund you to start one | Kat Woods | 11mo | 12 |
| 109 | Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] | Buck | 7mo | 7 |
| 104 | We're Redwood Research, we do applied alignment research, AMA | Buck | 1y | 49 |
| 88 | ARC is hiring alignment theory researchers | Paul_Christiano | 1y | 3 |
| 75 | Redwood Research is hiring for several roles | Jack R | 1y | 0 |
| 59 | I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) | Emerson Spartz | 1y | 48 |
| 52 | Ought: why it matters and ways to help | Paul_Christiano | 3y | 5 |
| 50 | A Barebones Guide to Mechanistic Interpretability Prerequisites | Neel Nanda | 21d | 1 |
| 47 | Join the interpretability research hackathon | Esben Kran | 1mo | 0 |
| 43 | Ought's theory of change | stuhlmueller | 8mo | 4 |
| 41 | AMA: Ought | stuhlmueller | 4mo | 52 |
| 25 | [Link] "Progress Update October 2019" (Ought) | Milan_Griffes | 3y | 1 |
| 22 | The limited upside of interpretability | Peter S. Park | 1mo | 3 |