Tags (620 posts): AI, Eliciting Latent Knowledge (ELK), AI Robustness, Truthful AI, Autonomy and Choice, Intelligence Explosion, Social Media, Transcripts

Tags (114 posts): Infra-Bayesianism, Counterfactuals, Interviews, Audio, Logic & Mathematics, AXRP, Redwood Research, Domain Theory, Counterfactual Mugging, Newcomb's Problem, Formal Proof, Functional Decision Theory
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 62 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20 |
| 37 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6 |
| 21 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0 |
| 232 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5 |
| 63 | Can we efficiently explain model behaviors? | paulfchristiano | 4d | 0 |
| 92 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39 |
| 159 | Using GPT-Eliezer against ChatGPT Jailbreaking | Stuart_Armstrong | 14d | 77 |
| 42 | High-level hopes for AI alignment | HoldenKarnofsky | 5d | 3 |
| 37 | Existential AI Safety is NOT separate from near-term applications | scasper | 7d | 15 |
| 91 | Finding gliders in the game of life | paulfchristiano | 19d | 7 |
| 121 | Mechanistic anomaly detection and ELK | paulfchristiano | 25d | 17 |
| 30 | Take 10: Fine-tuning with RLHF is aesthetically unsatisfying. | Charlie Steiner | 7d | 3 |
| 56 | Verification Is Not Easier Than Generation In General | johnswentworth | 14d | 23 |
| 68 | Why Would AI "Aim" To Defeat Humanity? | HoldenKarnofsky | 21d | 9 |
| 130 | Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] | LawrenceC | 17d | 9 |
| 134 | Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley | maxnadeau | 1mo | 14 |
| 26 | Causal scrubbing: results on a paren balance checker | LawrenceC | 17d | 0 |
| 86 | Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small | KevinRoWang | 1mo | 5 |
| 135 | Takeaways from our robust injury classifier project [Redwood Research] | dmz | 3mo | 9 |
| 16 | Causal scrubbing: Appendix | LawrenceC | 17d | 0 |
| 43 | A conversation about Katja's counterarguments to AI risk | Matthew Barnett | 2mo | 9 |
| 136 | High-stakes alignment via adversarial training [Redwood Research report] | dmz | 7mo | 29 |
| 16 | Vanessa Kosoy's PreDCA, distilled | Martín Soto | 1mo | 17 |
| 49 | Infra-Exercises, Part 1 | Diffractor | 3mo | 9 |
| 143 | Redwood Research’s current project | Buck | 1y | 29 |
| 112 | Why I'm excited about Redwood Research's current project | paulfchristiano | 1y | 6 |
| 98 | Infra-Bayesian physicalism: a formal theory of naturalized induction | Vanessa Kosoy | 1y | 20 |
| 25 | Ethan Perez on the Inverse Scaling Prize, Language Feedback and Red Teaming | Michaël Trazzi | 3mo | 0 |