45 posts: AI Impacts, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, OpenAI, Survival and Flourishing, Global Catastrophic Risk Institute, Center for Human-Compatible Artificial Intelligence, DeepMind, Human Compatible, Stuart Russell, Jaan Tallinn, Leverhulme Center for the Future of Intelligence

28 posts: Nonlinear Fund, Ought, AI interpretability, Redwood Research, Anthropic, Superintelligence, AI Alignment Forum, Instrumental convergence thesis, Malignant AI failure mode
Karma | Title | Author | Age | Comments
52 | The Slippery Slope from DALLE-2 to Deepfake Anarchy | stecas | 1mo | 11
46 | Common misconceptions about OpenAI | Jacob_Hilton | 3mo | 2
112 | Did OpenPhil ever publish their in-depth review of their three-year OpenAI grant? | Markus Amalthea Magnuson | 5mo | 2
11 | The Survival and Flourishing Fund grant applications open until August 23rd ($8m-$12m planned for dispersal) | Larks | 1y | 3
28 | "Taking AI Risk Seriously" – Thoughts by Andrew Critch | Raemon | 4y | 9
41 | Publication of Stuart Russell’s new book on AI safety - reviews needed | CaroJ | 3y | 8
44 | AI Impacts: Historic trends in technological progress | Aaron Gertler | 2y | 5
133 | 2021 AI Alignment Literature Review and Charity Comparison | Larks | 12mo | 18
15 | [Link] "Why Responsible AI Development Needs Cooperation on Safety" (OpenAI) | Milan_Griffes | 3y | 1
18 | MIRI Update and Fundraising Case | So8res | 6y | 17
33 | DeepMind: Generally capable agents emerge from open-ended play | kokotajlod | 1y | 10
9 | MIRI 2017 Fundraiser and Strategy Update | malo | 5y | 4
12 | BERI's "Project Grants" Program - Round One | rebecca_raible | 4y | 3
8 | MIRI is seeking an Office Manager / Force Multiplier | RobBensinger | 7y | 1
26 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 2
19 | The limited upside of interpretability | Peter S. Park | 1mo | 3
36 | AMA: Ought | stuhlmueller | 4mo | 52
9 | Is there a demo of "You can't fetch the coffee if you're dead"? | Ram Rachum | 1mo | 3
41 | A Barebones Guide to Mechanistic Interpretability Prerequisites | Neel Nanda | 21d | 1
193 | Listen to more EA content with The Nonlinear Library | Kat Woods | 1y | 89
113 | Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] | Buck | 7mo | 7
192 | EA needs a hiring agency and Nonlinear will fund you to start one | Kat Woods | 11mo | 12
12 | The Case for Superintelligence Safety As A Cause: A Non-Technical Summary | HunterJay | 3y | 9
72 | ARC is hiring alignment theory researchers | Paul_Christiano | 1y | 3
16 | Binary prediction database and tournament | amandango | 2y | 0
14 | Chris Olah on working at top AI labs without an undergrad degree | 80000_Hours | 1y | 0
70 | Redwood Research is hiring for several roles | Jack R | 1y | 0
9 | [Linkpost] The Problem With The Current State of AGI Definitions | Yitz | 6mo | 0