Tags: AI (1446 posts), Interpretability (ML & AI), AI Timelines, GPT, Research Agendas, Value Learning, AI Takeoff, Conjecture (org), Embedded Agency, Machine Learning (ML), Eliciting Latent Knowledge (ELK), Community (118 posts), Inner Alignment, Optimization, Solomonoff Induction, Predictive Processing, Selection vs Control, Neocortex, Mesa-Optimization, Neuroscience, Priors, AI Services (CAIS), Occam's Razor, General Intelligence
Karma | Title | Author | Posted | Comments
26 | Discovering Language Model Behaviors with Model-Written Evaluations | evhub | 4h | 3
79 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
39 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
15 | An Open Agency Architecture for Safe Transformative AI | davidad | 11h | 11
251 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5
7 | Note on algorithms with multiple trained components | Steven Byrnes | 7h | 1
132 | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | Collin | 5d | 18
12 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0
67 | Proper scoring rules don’t guarantee predicting fixed points | Johannes_Treutlein | 4d | 2
31 | Take 11: "Aligning language models" should be weirder. | Charlie Steiner | 2d | 0
307 | A challenge for AGI organizations, and a challenge for readers | Rob Bensinger | 19d | 30
53 | Can we efficiently explain model behaviors? | paulfchristiano | 4d | 0
96 | [Interim research report] Taking features out of superposition with sparse autoencoders | Lee Sharkey | 7d | 10
85 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39
56 | Paper: Constitutional AI: Harmlessness from AI Feedback (Anthropic) | LawrenceC | 4d | 10
102 | Inner and outer alignment decompose one hard problem into two extremely hard problems | TurnTrout | 18d | 18
23 | Take 8: Queer the inner/outer alignment dichotomy. | Charlie Steiner | 11d | 2
28 | Mesa-Optimizers via Grokking | orthonormal | 14d | 4
47 | My take on Jacob Cannell’s take on AGI safety | Steven Byrnes | 22d | 13
32 | Don't align agents to evaluations of plans | TurnTrout | 24d | 46
65 | Threat Model Literature Review | zac_kenton | 1mo | 4
127 | Externalized reasoning oversight: a research direction for language model alignment | tamera | 4mo | 22
111 | What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? | johnswentworth | 4mo | 15
8 | Take 6: CAIS is actually Orwellian. | Charlie Steiner | 13d | 5
20 | Value Formation: An Overarching Model | Thane Ruthenis | 1mo | 6
45 | Humans aren't fitness maximizers | So8res | 2mo | 45
65 | Human Mimicry Mainly Works When We’re Already Close | johnswentworth | 4mo | 16
41 | Framing AI Childhoods | David Udell | 3mo | 8