42 posts
Tags: Outer Alignment, Mesa-Optimization, Neuroscience, Neuromorphic AI, Predictive Processing, Neocortex, Computing Overhang, Planning & Decision-Making, Intentionality, Hansonian Pre-Rationality, Emergent Behavior (Emergence)
29 posts
Tags: Optimization, General Intelligence, AI Services (CAIS), Selection vs Control, Distinctions, Adaptation Executors, Narrow AI, World Modeling Techniques
Karma | Title | Author | Age | Comments
166 | Risks from Learned Optimization: Introduction | evhub | 3y | 42
147 | Matt Botvinick on the spontaneous emergence of learning algorithms | Adam Scholl | 2y | 87
144 | My computational framework for the brain | Steven Byrnes | 2y | 26
136 | Inner Alignment in Salt-Starved Rats | Steven Byrnes | 2y | 39
110 | Book review: "A Thousand Brains" by Jeff Hawkins | Steven Byrnes | 1y | 18
78 | Risks from Learned Optimization: Conclusion and Related Work | evhub | 3y | 4
78 | How uniform is the neocortex? | zhukeepa | 2y | 23
76 | Inner alignment in the brain | Steven Byrnes | 2y | 16
75 | Conditions for Mesa-Optimization | evhub | 3y | 48
68 | Human Mimicry Mainly Works When We’re Already Close | johnswentworth | 4mo | 16
64 | Brain-inspired AGI and the "lifetime anchor" | Steven Byrnes | 1y | 16
62 | An Increasingly Manipulative Newsfeed | Michaël Trazzi | 3y | 16
61 | "Inner Alignment Failures" Which Are Actually Outer Alignment Failures | johnswentworth | 2y | 38
61 | My take on Jacob Cannell’s take on AGI safety | Steven Byrnes | 22d | 13
Karma | Title | Author | Age | Comments
217 | The ground of optimization | Alex Flint | 2y | 74
139 | Selection vs Control | abramdemski | 3y | 25
118 | Reframing Superintelligence: Comprehensive AI Services as General Intelligence | Rohin Shah | 3y | 75
103 | What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? | johnswentworth | 4mo | 15
98 | Optimization Amplifies | Scott Garrabrant | 4y | 12
79 | Bottle Caps Aren't Optimisers | DanielFilan | 4y | 21
78 | How special are human brains among animal brains? | zhukeepa | 2y | 38
76 | Comments on CAIS | Richard_Ngo | 3y | 14
68 | Optimization Concepts in the Game of Life | Vika | 1y | 15
64 | Six AI Risk/Strategy Ideas | Wei_Dai | 3y | 18
58 | Reflective Bayesianism | abramdemski | 1y | 27
52 | Humans aren't fitness maximizers | So8res | 2mo | 45
52 | Aligning a toy model of optimization | paulfchristiano | 3y | 26
51 | Ngo and Yudkowsky on scientific reasoning and pivotal acts | Eliezer Yudkowsky | 10mo | 13