Tags (21 posts): Ought, AI interpretability, Redwood Research, Anthropic, Alignment Research Center
Tags (7 posts): Nonlinear Fund, Superintelligence, AI Alignment Forum, Instrumental convergence thesis, Malignant AI failure mode
Karma | Title | Author | Posted | Comments
111 | We're Redwood Research, we do applied alignment research, AMA | Buck | 1y | 49
105 | Apply to the second ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2] | Buck | 7mo | 7
104 | ARC is hiring alignment theory researchers | Paul_Christiano | 1y | 3
80 | Redwood Research is hiring for several roles | Jack R | 1y | 0
59 | Ought: why it matters and ways to help | Paul_Christiano | 3y | 5
59 | A Barebones Guide to Mechanistic Interpretability Prerequisites | Neel Nanda | 21d | 1
49 | Ought's theory of change | stuhlmueller | 8mo | 4
46 | AMA: Ought | stuhlmueller | 4mo | 52
37 | Join the interpretability research hackathon | Esben Kran | 1mo | 0
33 | [Link] "Progress Update October 2019" (Ought) | Milan_Griffes | 3y | 1
25 | The limited upside of interpretability | Peter S. Park | 1mo | 3
22 | Automating reasoning about the future at Ought | jungofthewon | 2y | 0
16 | Chris Olah on working at top AI labs without an undergrad degree | 80000_Hours | 1y | 0
16 | Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) | Remmelt | 1d | 2
171 | Listen to more EA content with The Nonlinear Library | Kat Woods | 1y | 89
148 | EA needs a hiring agency and Nonlinear will fund you to start one | Kat Woods | 11mo | 12
47 | I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) | Emerson Spartz | 1y | 48
12 | The Case for Superintelligence Safety As A Cause: A Non-Technical Summary | HunterJay | 3y | 9
7 | Is there a demo of "You can't fetch the coffee if you're dead"? | Ram Rachum | 1mo | 3
6 | How likely are malign priors over objectives? [aborted WIP] | David Johnston | 1mo | 0
5 | [Linkpost] The Problem With The Current State of AGI Definitions | Yitz | 6mo | 0