Branch 1 (42 posts)
Tags: Outer Alignment, Mesa-Optimization, Neuroscience, Neuromorphic AI, Predictive Processing, Neocortex, Computing Overhang, Planning & Decision-Making, Intentionality, Hansonian Pre-Rationality, Emergent Behavior (Emergence)
Branch 2 (29 posts)
Tags: Optimization, General Intelligence, AI Services (CAIS), Selection vs Control, Distinctions, Adaptation Executors, Narrow AI, World Modeling Techniques
Branch 1 posts:

Karma | Title | Author | Age | Comments
151 | Matt Botvinick on the spontaneous emergence of learning algorithms | Adam Scholl | 2y | 87
148 | Risks from Learned Optimization: Introduction | evhub | 3y | 42
141 | Inner Alignment in Salt-Starved Rats | Steven Byrnes | 2y | 39
123 | Book review: "A Thousand Brains" by Jeff Hawkins | Steven Byrnes | 1y | 18
114 | My computational framework for the brain | Steven Byrnes | 2y | 26
85 | Risks from Learned Optimization: Conclusion and Related Work | evhub | 3y | 4
78 | "Inner Alignment Failures" Which Are Actually Outer Alignment Failures | johnswentworth | 2y | 38
78 | How uniform is the neocortex? | zhukeepa | 2y | 23
77 | Brain-inspired AGI and the "lifetime anchor" | Steven Byrnes | 1y | 16
75 | My take on Jacob Cannell’s take on AGI safety | Steven Byrnes | 22d | 13
75 | Conditions for Mesa-Optimization | evhub | 3y | 48
74 | Mesa-Search vs Mesa-Control | abramdemski | 2y | 45
73 | Inner alignment in the brain | Steven Byrnes | 2y | 16
73 | An Increasingly Manipulative Newsfeed | Michaël Trazzi | 3y | 16
Branch 2 posts:

Karma | Title | Author | Age | Comments
206 | The ground of optimization | Alex Flint | 2y | 74
137 | Selection vs Control | abramdemski | 3y | 25
102 | Optimization Amplifies | Scott Garrabrant | 4y | 12
96 | Reframing Superintelligence: Comprehensive AI Services as General Intelligence | Rohin Shah | 3y | 75
95 | What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? | johnswentworth | 4mo | 15
89 | Bottle Caps Aren't Optimisers | DanielFilan | 4y | 21
80 | Reflective Bayesianism | abramdemski | 1y | 27
77 | Comments on CAIS | Richard_Ngo | 3y | 14
77 | How special are human brains among animal brains? | zhukeepa | 2y | 38
75 | Six AI Risk/Strategy Ideas | Wei_Dai | 3y | 18
70 | Optimization Concepts in the Game of Life | Vika | 1y | 15
61 | Ngo and Yudkowsky on scientific reasoning and pivotal acts | Eliezer Yudkowsky | 10mo | 13
60 | Aligning a toy model of optimization | paulfchristiano | 3y | 26
59 | Humans aren't fitness maximizers | So8res | 2mo | 45