Tags: AI (1446 posts), Interpretability (ML & AI), AI Timelines, GPT, Research Agendas, Value Learning, AI Takeoff, Conjecture (org), Embedded Agency, Machine Learning (ML), Eliciting Latent Knowledge (ELK), Community (118 posts), Inner Alignment, Optimization, Solomonoff Induction, Predictive Processing, Selection vs Control, Neocortex, Mesa-Optimization, Neuroscience, Priors, AI Services (CAIS), Occam's Razor, General Intelligence
Karma | Title | Author | Posted | Comments
27 | Discovering Language Model Behaviors with Model-Written Evaluations | evhub | 4h | 3
62 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20
37 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6
10 | Note on algorithms with multiple trained components | Steven Byrnes | 7h | 1
13 | An Open Agency Architecture for Safe Transformative AI | davidad | 11h | 11
21 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0
232 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5
123 | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | Collin | 5d | 18
63 | Can we efficiently explain model behaviors? | paulfchristiano | 4d | 0
55 | Proper scoring rules don’t guarantee predicting fixed points | Johannes_Treutlein | 4d | 2
29 | Take 11: "Aligning language models" should be weirder. | Charlie Steiner | 2d | 0
92 | Trying to disambiguate different questions about whether RLHF is “good” | Buck | 6d | 39
265 | A challenge for AGI organizations, and a challenge for readers | Rob Bensinger | 19d | 30
18 | Event [Berkeley]: Alignment Collaborator Speed-Meeting | AlexMennen | 1d | 2
60 | Paper: Constitutional AI: Harmlessness from AI Feedback (Anthropic) | LawrenceC | 4d | 10
96 | Inner and outer alignment decompose one hard problem into two extremely hard problems | TurnTrout | 18d | 18
61 | My take on Jacob Cannell’s take on AGI safety | Steven Byrnes | 22d | 13
35 | Mesa-Optimizers via Grokking | orthonormal | 14d | 4
26 | Take 8: Queer the inner/outer alignment dichotomy. | Charlie Steiner | 11d | 2
37 | Don't align agents to evaluations of plans | TurnTrout | 24d | 46
14 | Take 6: CAIS is actually Orwellian. | Charlie Steiner | 13d | 5
55 | Threat Model Literature Review | zac_kenton | 1mo | 4
103 | What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? | johnswentworth | 4mo | 15
103 | Externalized reasoning oversight: a research direction for language model alignment | tamera | 4mo | 22
52 | Humans aren't fitness maximizers | So8res | 2mo | 45
20 | Value Formation: An Overarching Model | Thane Ruthenis | 1mo | 6
68 | Human Mimicry Mainly Works When We’re Already Close | johnswentworth | 4mo | 16
37 | Framing AI Childhoods | David Udell | 3mo | 8