Tags (62 posts): Interviews · Redwood Research · Organization Updates · AXRP · Adversarial Examples · Adversarial Training · AI Robustness
Karma · Title · Author · Posted · Comments
143 · Redwood Research’s current project · Buck · 1y · 29
136 · High-stakes alignment via adversarial training [Redwood Research report] · dmz · 7mo · 29
135 · Takeaways from our robust injury classifier project [Redwood Research] · dmz · 3mo · 9
134 · Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley · maxnadeau · 1mo · 14
130 · Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] · LawrenceC · 17d · 9
116 · [Transcript] Richard Feynman on Why Questions · Grognor · 10y · 45
112 · Why I'm excited about Redwood Research's current project · paulfchristiano · 1y · 6
110 · I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · lsusr · 1y · 33
86 · Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small · KevinRoWang · 1mo · 5
58 · AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah · Palus Astra · 2y · 27
57 · What I've been doing instead of writing · benkuhn · 1y · 3
56 · We're Redwood Research, we do applied alignment research, AMA · Nate Thomas · 1y · 3
56 · AXRP Episode 9 - Finite Factored Sets with Scott Garrabrant · DanielFilan · 1y · 2
48 · Redwood's Technique-Focused Epistemic Strategy · adamShimi · 1y · 1

Tag (17 posts): Audio
Karma · Title · Author · Posted · Comments
153 · Curated conversations with brilliant rationalists · spencerg · 1y · 18
131 · Announcing the LessWrong Curated Podcast · Ben Pace · 6mo · 17
74 · Listen to top LessWrong posts with The Nonlinear Library · KatWoods · 1y · 27
46 · How and why to turn everything into audio · KatWoods · 4mo · 18
41 · AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger · DanielFilan · 1y · 10
39 · New: use The Nonlinear Library to listen to the top LessWrong posts of all time · KatWoods · 8mo · 9
37 · Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas · Akash · 25d · 2
31 · Shahar Avin On How To Regulate Advanced AI Systems · Michaël Trazzi · 2mo · 0
29 · Which LessWrong content would you like recorded into audio/podcast form? · Ruby · 3mo · 11
28 · Steganography and the CycleGAN - alignment failure case study · Jan Czechowski · 6mo · 0
26 · Feelings of Admiration, Ruby <=> Miranda · Ruby · 1y · 0
26 · Me (Steve Byrnes) on the “Brain Inspired” podcast · Steven Byrnes · 1mo · 1
14 · Interview with Matt Freeman · Evenflair · 29d · 0
13 · Podcasts on surveys, slower AI, AI arguments, etc · KatjaGrace · 3mo · 0