AI (1564 posts): Inner Alignment, Interpretability (ML & AI), AI Timelines, GPT, Research Agendas, AI Takeoff, Value Learning, Machine Learning (ML), Conjecture (org), Mesa-Optimization, Outer Alignment

(349 posts): Abstraction, Impact Regularization, Rationality, World Modeling, Decision Theory, Human Values, Goal-Directedness, Anthropics, Utility Functions, Finite Factored Sets, Shard Theory, Fixed Point Theorems
| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 26 | Discovering Language Model Behaviors with Model-Written Evaluations | evhub | 4h | 3 |
| 79 | Towards Hodge-podge Alignment | Cleo Nardo | 1d | 20 |
| 39 | The "Minimal Latents" Approach to Natural Abstractions | johnswentworth | 22h | 6 |
| 15 | An Open Agency Architecture for Safe Transformative AI | davidad | 11h | 11 |
| 251 | AI alignment is distinct from its near-term applications | paulfchristiano | 7d | 5 |
| 7 | Note on algorithms with multiple trained components | Steven Byrnes | 7h | 1 |
| 132 | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | Collin | 5d | 18 |
| 12 | Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems. | Charlie Steiner | 19h | 0 |
| 67 | Proper scoring rules don’t guarantee predicting fixed points | Johannes_Treutlein | 4d | 2 |
| 56 | Paper: Constitutional AI: Harmlessness from AI Feedback (Anthropic) | LawrenceC | 4d | 10 |
| 31 | Take 11: "Aligning language models" should be weirder. | Charlie Steiner | 2d | 0 |
| 307 | A challenge for AGI organizations, and a challenge for readers | Rob Bensinger | 19d | 30 |
| 53 | Can we efficiently explain model behaviors? | paulfchristiano | 4d | 0 |
| 96 | [Interim research report] Taking features out of superposition with sparse autoencoders | Lee Sharkey | 7d | 10 |
| 62 | Shard Theory in Nine Theses: a Distillation and Critical Appraisal | LawrenceC | 1d | 9 |
| 171 | Finite Factored Sets in Pictures | Magdalena Wache | 9d | 29 |
| 38 | Positive values seem more robust and lasting than prohibitions | TurnTrout | 3d | 9 |
| 981 | Where I agree and disagree with Eliezer | paulfchristiano | 6mo | 205 |
| 35 | Take 7: You should talk about "the human's utility function" less. | Charlie Steiner | 12d | 22 |
| 52 | Alignment allows "nonrobust" decision-influences and doesn't require robust grading | TurnTrout | 21d | 27 |
| 381 | Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover | Ajeya Cotra | 5mo | 89 |
| 249 | The shard theory of human values | Quintin Pope | 3mo | 57 |
| 38 | Open technical problem: A Quinean proof of Löb's theorem, for an easier cartoon guide | Andrew_Critch | 26d | 34 |
| 12 | Working towards AI alignment is better | Johannes C. Mayer | 11d | 2 |
| 73 | Contra shard theory, in the context of the diamond maximizer problem | So8res | 2mo | 16 |
| 191 | Humans provide an untapped wealth of evidence about alignment | TurnTrout | 5mo | 92 |
| 32 | Unpacking "Shard Theory" as Hunch, Question, Theory, and Insight | Jacy Reese Anthis | 1mo | 8 |
| 120 | Shard Theory: An Overview | David Udell | 4mo | 34 |