620 posts
Tags: AI, Eliciting Latent Knowledge (ELK), AI Robustness, Truthful AI, Autonomy and Choice, Intelligence Explosion, Social Media, Transcripts

114 posts
Tags: Infra-Bayesianism, Counterfactuals, Interviews, Audio, Logic & Mathematics, AXRP, Redwood Research, Domain Theory, Counterfactual Mugging, Newcomb's Problem, Formal Proof, Functional Decision Theory
Karma | Title | Author | Posted | Comments
45 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
30 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0
35 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
213 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5
73 | Can we efficiently explain model behaviors? | paulfchristiano | 4d | 0
99 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39
55 | High-level hopes for AI alignment | HoldenKarnofsky | 5d | 3
136 | Using GPT-Eliezer against ChatGPT Jailbreaking | Stuart_Armstrong | 14d | 77
106 | Finding gliders in the game of life | paulfchristiano | 19d | 7
37 | Take 10: Fine-tuning with RLHF is aesthetically unsatisfying. | Charlie Steiner | 7d | 3
64 | Verification Is Not Easier Than Generation In General | johnswentworth | 14d | 23
32 | Concept extrapolation for hypothesis generation | Stuart_Armstrong | 8d | 2
113 | Mechanistic anomaly detection and ELK | paulfchristiano | 25d | 17
25 | Existential AI Safety is NOT separate from near-term applications | scasper | 7d | 15
106 | Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] | LawrenceC | 17d | 9
118 | Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley | maxnadeau | 1mo | 14
27 | Causal scrubbing: results on a paren balance checker | LawrenceC | 17d | 0
73 | Some Lessons Learned from Studying Indirect Object Identification in GPT-2 small | KevinRoWang | 1mo | 5
134 | Takeaways from our robust injury classifier project [Redwood Research] | dmz | 3mo | 9
15 | Causal scrubbing: Appendix | LawrenceC | 17d | 0
51 | A conversation about Katja's counterarguments to AI risk | Matthew Barnett | 2mo | 9
98 | High-stakes alignment via adversarial training [Redwood Research report] | dmz | 7mo | 29
41 | Infra-Exercises, Part 1 | Diffractor | 3mo | 9
170 | Redwood Research’s current project | Buck | 1y | 29
129 | Why I'm excited about Redwood Research's current project | paulfchristiano | 1y | 6
39 | Hessian and Basin volume | Vivek Hebbar | 5mo | 9
98 | Infra-Bayesian physicalism: a formal theory of naturalized induction | Vanessa Kosoy | 1y | 20
39 | Adversarial training, importance sampling, and anti-adversarial training for AI whistleblowing | Buck | 6mo | 0