80 posts
Tags: Oracle AI, Myopia, AI Boxing (Containment), Deceptive Alignment, Deception, Acausal Trade, Self Fulfilling/Refuting Prophecies, Bounties (closed), Parables & Fables, Superrationality, Values handshakes, Computer Security & Cryptography
88 posts
Tags: Conjecture (org), Language Models, Refine, Agency, Deconfusion, Scaling Laws, Project Announcement, Encultured AI (org), Tool AI, Definitions, PaLM, Prompt Engineering
Karma | Title | Author | Posted | Comments
55 | Proper scoring rules don’t guarantee predicting fixed points | Johannes_Treutlein | 4d | 2
35 | Side-channels: input versus output | davidad | 8d | 9
37 | Steering Behaviour: Testing for (Non-)Myopia in Language Models | Evan R. Murphy | 15d | 16
87 | Trying to Make a Treacherous Mesa-Optimizer | MadHatter | 1mo | 13
142 | Decision theory does not imply that we get to have nice things | So8res | 2mo | 53
118 | Monitoring for deceptive alignment | evhub | 3mo | 7
72 | How likely is deceptive alignment? | evhub | 3mo | 21
35 | Sticky goals: a concrete experiment for understanding deceptive alignment | evhub | 3mo | 13
43 | Acceptability Verification: A Research Agenda | David Udell | 5mo | 0
291 | The Parable of Predict-O-Matic | abramdemski | 3y | 42
26 | Training goals for large language models | Johannes_Treutlein | 5mo | 5
18 | Precursor checking for deceptive alignment | evhub | 4mo | 0
30 | The Speed + Simplicity Prior is probably anti-deceptive | | 7mo | 29
23 | Framings of Deceptive Alignment | peterbarnett | 7mo | 6

Karma | Title | Author | Posted | Comments
27 | Discovering Language Model Behaviors with Model-Written Evaluations | evhub | 4h | 3
29 | Take 11: "Aligning language models" should be weirder. | Charlie Steiner | 2d | 0
80 | [Interim research report] Taking features out of superposition with sparse autoencoders | Lee Sharkey | 7d | 10
159 | The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable | beren | 22d | 27
183 | Conjecture: a retrospective after 8 months of work | Connor Leahy | 27d | 9
213 | Mysteries of mode collapse | janus | 1mo | 35
103 | What I Learned Running Refine | adamShimi | 26d | 5
472 | Simulators | janus | 3mo | 103
85 | Conjecture Second Hiring Round | Connor Leahy | 27d | 0
64 | Searching for Search | NicholasKees | 22d | 6
82 | Current themes in mechanistic interpretability research | Lee Sharkey | 1mo | 3
364 | chinchilla's wild implications | nostalgebraist | 4mo | 114
123 | Interpreting Neural Networks through the Polytope Lens | Sid Black | 2mo | 26
164 | Language models seem to be much better than humans at next-token prediction | Buck | 4mo | 56