Related topics (45 posts): AI Impacts, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, OpenAI, Survival and Flourishing, Global Catastrophic Risk Institute, Center for Human-Compatible Artificial Intelligence, DeepMind, Human Compatible, Stuart Russell, Jaan Tallinn, Leverhulme Center for the Future of Intelligence

Related topics (28 posts): Nonlinear Fund, Ought, AI interpretability, Redwood Research, Anthropic, Superintelligence, AI Alignment Forum, Instrumental convergence thesis, Malignant AI failure mode
Posts (karma · title · author · age · comments):

53 · The Slippery Slope from DALLE-2 to Deepfake Anarchy · stecas · 1mo · 11
126 · Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? · Markus Amalthea Magnuson · 5mo · 2
51 · Common misconceptions about OpenAI · Jacob_Hilton · 3mo · 2
165 · 2021 AI Alignment Literature Review and Charity Comparison · Larks · 12mo · 18
150 · 2020 AI Alignment Literature Review and Charity Comparison · Larks · 1y · 16
21 · BERI is seeking new collaborators (2022) · sawyer · 7mo · 0
54 · DeepMind is hiring Long-term Strategy & Governance researchers · vishal · 1y · 1
35 · Visible Thoughts Project and Bounty Announcement · So8res · 1y · 2
48 · DeepMind: Generally capable agents emerge from open-ended play · kokotajlod · 1y · 10
6 · BERI is hiring a Deputy Director · sawyer · 5mo · 0
21 · The Survival and Flourishing Fund grant applications open until August 23rd ($8m-$12m planned for dispersal) · Larks · 1y · 3
58 · Primates vs birds: Is one brain architecture better than the other? · AI Impacts · 3y · 2
25 · BERI seeking new collaborators · sawyer · 1y · 2
27 · Cortés, Pizarro, and Afonso as Precedents for Takeover · AI Impacts · 2y · 17
21 · Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) · Remmelt · 1d · 2
50 · A Barebones Guide to Mechanistic Interpretability Prerequisites · Neel Nanda · 21d · 1
47 · Join the interpretability research hackathon · Esben Kran · 1mo · 0
22 · The limited upside of interpretability · Peter S. Park · 1mo · 3
8 · Is there a demo of "You can't fetch the coffee if you're dead"? · Ram Rachum · 1mo · 3
109 · Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] · Buck · 7mo · 7
170 · EA needs a hiring agency and Nonlinear will fund you to start one · Kat Woods · 11mo · 12
6 · How likely are malign priors over objectives? [aborted WIP] · David Johnston · 1mo · 0
182 · Listen to more EA content with The Nonlinear Library · Kat Woods · 1y · 89
41 · AMA: Ought · stuhlmueller · 4mo · 52
2 · Is it possible that SBF-linked funds haven't yet been transferred to Anthropic or that Anthropic would have to return these funds? · donegal · 1mo · 0
88 · ARC is hiring alignment theory researchers · Paul_Christiano · 1y · 3
104 · We're Redwood Research, we do applied alignment research, AMA · Buck · 1y · 49
75 · Redwood Research is hiring for several roles · Jack R · 1y · 0