62 posts · Tags: Interviews, Redwood Research, Organization Updates, AXRP, Adversarial Examples, Adversarial Training, AI Robustness
17 posts · Tag: Audio
Karma | Title | Author | Posted | Comments
184 | High-stakes alignment via adversarial training [Redwood Research report] | dmz | 7mo | 29
164 | Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] | LawrenceC | 17d | 9
159 | Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley | maxnadeau | 1mo | 14
143 | Takeaways from our robust injury classifier project [Redwood Research] | dmz | 3mo | 9
121 | Redwood Research’s current project | Buck | 1y | 29
108 | I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead | lsusr | 1y | 33
105 | Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small | KevinRoWang | 1mo | 5
98 | Why I'm excited about Redwood Research's current project | paulfchristiano | 1y | 6
97 | [Transcript] Richard Feynman on Why Questions | Grognor | 10y | 45
51 | Conversation with Paul Christiano | abergal | 3y | 6
50 | What I've been doing instead of writing | benkuhn | 1y | 3
50 | We're Redwood Research, we do applied alignment research, AMA | Nate Thomas | 1y | 3
48 | AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah | Palus Astra | 2y | 27
45 | Two clarifications about "Strategic Background" | Rob Bensinger | 4y | 6
Karma | Title | Author | Posted | Comments
165 | Curated conversations with brilliant rationalists | spencerg | 1y | 18
156 | Announcing the LessWrong Curated Podcast | Ben Pace | 6mo | 17
104 | Listen to top LessWrong posts with The Nonlinear Library | KatWoods | 1y | 27
57 | New: use The Nonlinear Library to listen to the top LessWrong posts of all time | KatWoods | 8mo | 9
43 | How and why to turn everything into audio | KatWoods | 4mo | 18
42 | Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas | Akash | 25d | 2
37 | Shahar Avin On How To Regulate Advanced AI Systems | Michaël Trazzi | 2mo | 0
35 | Steganography and the CycleGAN - alignment failure case study | Jan Czechowski | 6mo | 0
34 | AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger | DanielFilan | 1y | 10
26 | Me (Steve Byrnes) on the “Brain Inspired” podcast | Steven Byrnes | 1mo | 1
22 | Which LessWrong content would you like recorded into audio/podcast form? | Ruby | 3mo | 11
20 | Feelings of Admiration, Ruby <=> Miranda | Ruby | 1y | 0
20 | An Audio Introduction to Nick Bostrom | PeterH | 3mo | 0
13 | Podcasts on surveys, slower AI, AI arguments, etc | KatjaGrace | 3mo | 0