Tags: Interpretability (ML & AI) (47 posts), Empiricism (5 posts), AI Success Models, Conservatism (AI), Principal-Agent Problems, Market making (AI safety technique)
Karma | Title | Author | Posted | Comments
132 | How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme | Collin | 5d | 18
20 | Paper: Transformers learn in-context by gradient descent | LawrenceC | 4d | 11
239 | The Plan - 2022 Update | johnswentworth | 19d | 33
53 | Multi-Component Learning and S-Curves | Adam Jermyn | 20d | 24
43 | "Cars and Elephants": a handwavy argument/analogy against mechanistic interpretability | David Scott Krueger (formerly: capybaralet) | 1mo | 25
33 | Extracting and Evaluating Causal Direction in LLMs' Activations | Fabien Roger | 6d | 2
30 | A Walkthrough of Interpretability in the Wild (w/ authors Kevin Wang, Arthur Conmy & Alexandre Variengien) | Neel Nanda | 1mo | 15
81 | Real-Time Research Recording: Can a Transformer Re-Derive Positional Info? | Neel Nanda | 1mo | 14
29 | Toy Models and Tegum Products | Adam Jermyn | 1mo | 7
83 | Polysemanticity and Capacity in Neural Networks | Buck | 2mo | 9
75 | Engineering Monosemanticity in Toy Models | Adam Jermyn | 1mo | 6
29 | Subsets and quotients in interpretability | Erik Jenner | 18d | 1
422 | A Mechanistic Interpretability Analysis of Grokking | Neel Nanda | 4mo | 39
57 | A Walkthrough of A Mathematical Framework for Transformer Circuits | Neel Nanda | 1mo | 5
15 | An Open Agency Architecture for Safe Transformative AI | davidad | 11h | 11
71 | A positive case for how we might succeed at prosaic AI alignment | evhub | 1y | 47
66 | Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios | Evan R. Murphy | 7mo | 0
55 | Solving the whole AGI control problem, version 0.0001 | Steven Byrnes | 1y | 7
28 | Pessimism About Unknown Unknowns Inspires Conservatism | michaelcohen | 2y | 2