Tag collection (51 posts):
- Machine Learning (ML)
- DeepMind
- OpenAI
- Truth, Semantics, & Meaning
- Lottery Ticket Hypothesis
- Honesty
- Anthropic
- Map and Territory
- Calibration
Tag collection (52 posts):
- Interpretability (ML & AI)
- AI Success Models
- Conservatism (AI)
- Principal-Agent Problems
- Market making (AI safety technique)
- Empiricism
Posts (karma · title · author · age · comments):

223 · A challenge for AGI organizations, and a challenge for readers · Rob Bensinger · 19d · 30 comments
59 · Reframing inner alignment · davidad · 9d · 13 comments
19 · My thoughts on OpenAI's Alignment plan · Donald Hobson · 10d · 0 comments
318 · DeepMind alignment team opinions on AGI ruin arguments · Vika · 4mo · 34 comments
104 · Caution when interpreting Deepmind's In-context RL paper · Sam Marks · 1mo · 6 comments
199 · Common misconceptions about OpenAI · Jacob_Hilton · 3mo · 138 comments
64 · Clarifying AI X-risk · zac_kenton · 1mo · 23 comments
86 · Paper: Discovering novel algorithms with AlphaTensor [Deepmind] · LawrenceC · 2mo · 18 comments
66 · Toy Models of Superposition · evhub · 3mo · 2 comments
81 · Survey of NLP Researchers: NLP is contributing to AGI progress; major catastrophe plausible · Sam Bowman · 3mo · 6 comments
43 · Paper+Summary: OMNIGROK: GROKKING BEYOND ALGORITHMIC DATA · Marius Hobbhahn · 2mo · 11 comments
27 · Maps and Blueprint; the Two Sides of the Alignment Equation · Nora_Ammann · 1mo · 1 comment
84 · Safety Implications of LeCun's path to machine intelligence · Ivan Vendrov · 5mo · 16 comments
55 · Autonomy as taking responsibility for reference maintenance · Ramana Kumar · 4mo · 3 comments
11 · An Open Agency Architecture for Safe Transformative AI · davidad · 11h · 11 comments
114 · How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme · Collin · 5d · 18 comments
183 · The Plan - 2022 Update · johnswentworth · 19d · 33 comments
32 · Paper: Transformers learn in-context by gradient descent · LawrenceC · 4d · 11 comments
37 · [ASoT] Natural abstractions and AlphaZero · Ulisse Mini · 10d · 1 comment
56 · Re-Examining LayerNorm · Eric Winsor · 19d · 8 comments
61 · Multi-Component Learning and S-Curves · Adam Jermyn · 20d · 24 comments
11 · Extracting and Evaluating Causal Direction in LLMs' Activations · Fabien Roger · 6d · 2 comments
69 · Engineering Monosemanticity in Toy Models · Adam Jermyn · 1mo · 6 comments
254 · A Mechanistic Interpretability Analysis of Grokking · Neel Nanda · 4mo · 39 comments
19 · Subsets and quotients in interpretability · Erik Jenner · 18d · 1 comment
55 · Real-Time Research Recording: Can a Transformer Re-Derive Positional Info? · Neel Nanda · 1mo · 14 comments
51 · "Cars and Elephants": a handwavy argument/analogy against mechanistic interpretability · David Scott Krueger (formerly: capybaralet) · 1mo · 25 comments
73 · Polysemanticity and Capacity in Neural Networks · Buck · 2mo · 9 comments