(80 posts) Tags: Oracle AI, Myopia, AI Boxing (Containment), Deceptive Alignment, Deception, Acausal Trade, Self Fulfilling/Refuting Prophecies, Bounties (closed), Parables & Fables, Superrationality, Values handshakes
Computer Security & Cryptography (88 posts) Tags: Conjecture (org), Language Models, Refine, Agency, Deconfusion, Scaling Laws, Project Announcement, Encultured AI (org), Tool AI, Definitions, PaLM, Prompt Engineering
Karma · Title · Author · Posted · Comments

35 · Side-channels: input versus output · davidad · 8d · 9
55 · Proper scoring rules don’t guarantee predicting fixed points · Johannes_Treutlein · 4d · 2
37 · Steering Behaviour: Testing for (Non-)Myopia in Language Models · Evan R. Murphy · 15d · 16
142 · Decision theory does not imply that we get to have nice things · So8res · 2mo · 53
72 · How likely is deceptive alignment? · evhub · 3mo · 21
35 · Sticky goals: a concrete experiment for understanding deceptive alignment · evhub · 3mo · 13
26 · Training goals for large language models · Johannes_Treutlein · 5mo · 5
118 · Monitoring for deceptive alignment · evhub · 3mo · 7
26 · Understanding and controlling auto-induced distributional shift · LRudL · 1y · 3
12 · Training Trace Priors · Adam Jermyn · 6mo · 17
50 · LCDT, A Myopic Decision Theory · adamShimi · 1y · 51
291 · The Parable of Predict-O-Matic · abramdemski · 3y · 42
31 · Random Thoughts on Predict-O-Matic · abramdemski · 3y · 3
54 · Cryptographic Boxes for Unfriendly AI · paulfchristiano · 12y · 162
27 · Discovering Language Model Behaviors with Model-Written Evaluations · evhub · 4h · 3
80 · [Interim research report] Taking features out of superposition with sparse autoencoders · Lee Sharkey · 7d · 10
159 · The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable · beren · 22d · 27
472 · Simulators · janus · 3mo · 103
213 · Mysteries of mode collapse · janus · 1mo · 35
27 · Inverse scaling can become U-shaped · Edouard Harris · 1mo · 15
183 · Conjecture: a retrospective after 8 months of work · Connor Leahy · 27d · 9
103 · What I Learned Running Refine · adamShimi · 26d · 5
52 · Paper: Large Language Models Can Self-improve [Linkpost] · Evan R. Murphy · 2mo · 14
164 · Language models seem to be much better than humans at next-token prediction · Buck · 4mo · 56
88 · Inverse Scaling Prize: Round 1 Winners · Ethan Perez · 2mo · 16
47 · Smoke without fire is scary · Adam Jermyn · 2mo · 22
123 · Interpreting Neural Networks through the Polytope Lens · Sid Black · 2mo · 26
28 · Beware over-use of the agent model · Alex Flint · 1y · 10