55 posts tagged: Language Models, Agency, Deconfusion, Scaling Laws, Tool AI, Definitions, Simulation Hypothesis, PaLM, Prompt Engineering, Philosophy of Language, Carving / Clustering Reality, Astronomical Waste
33 posts tagged: Conjecture (org), Refine, Project Announcement, Encultured AI (org), Analogy
| Karma | Title | Author | Posted | Comments |
| --- | --- | --- | --- | --- |
| 28 | Discovering Language Model Behaviors with Model-Written Evaluations | evhub | 4h | 3 |
| 185 | Simulators | janus | 3mo | 103 |
| 28 | Inverse scaling can become U-shaped | Edouard Harris | 1mo | 15 |
| 40 | Paper: Large Language Models Can Self-improve [Linkpost] | Evan R. Murphy | 2mo | 14 |
| 163 | Language models seem to be much better than humans at next-token prediction | Buck | 4mo | 56 |
| 75 | Inverse Scaling Prize: Round 1 Winners | Ethan Perez | 2mo | 16 |
| 45 | Smoke without fire is scary | Adam Jermyn | 2mo | 22 |
| 38 | Beware over-use of the agent model | Alex Flint | 1y | 10 |
| 15 | A Test for Language Model Consciousness | Ethan Perez | 3mo | 14 |
| 234 | chinchilla's wild implications | nostalgebraist | 4mo | 114 |
| 63 | Vingean Agency | abramdemski | 3mo | 13 |
| 22 | Conditioning Generative Models for Alignment | Jozdien | 5mo | 8 |
| 22 | Conditioning Generative Models | Adam Jermyn | 5mo | 18 |
| 7 | Disentangling inner alignment failures | Erik Jenner | 2mo | 5 |
| 64 | [Interim research report] Taking features out of superposition with sparse autoencoders | Lee Sharkey | 7d | 10 |
| 96 | The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable | beren | 22d | 27 |
| 178 | Mysteries of mode collapse | janus | 1mo | 35 |
| 143 | Conjecture: a retrospective after 8 months of work | Connor Leahy | 27d | 9 |
| 108 | What I Learned Running Refine | adamShimi | 26d | 5 |
| 56 | Interpreting Neural Networks through the Polytope Lens | Sid Black | 2mo | 26 |
| 41 | Current themes in mechanistic interpretability research | Lee Sharkey | 1mo | 3 |
| 42 | My Thoughts on the ML Safety Course | zeshen | 2mo | 3 |
| 61 | Circumventing interpretability: How to defeat mind-readers | Lee Sharkey | 5mo | 8 |
| 41 | the Insulated Goal-Program idea | carado | 4mo | 3 |
| 32 | Encultured AI Pre-planning, Part 2: Providing a Service | Andrew_Critch | 4mo | 4 |
| 105 | Announcing Encultured AI: Building a Video Game | Andrew_Critch | 4mo | 26 |
| 41 | Abstracting The Hardness of Alignment: Unbounded Atomic Optimization | adamShimi | 4mo | 3 |
| 68 | How to Diversify Conceptual Alignment: the Model Behind Refine | adamShimi | 5mo | 11 |