Logical Induction (14 posts)
Related tags: Gears-Level, Inside/Outside View, Problem of Old Evidence, Logical Uncertainty

Newsletters (107 posts)
Related tags: Postmortems & Retrospectives, Bayesian Decision Theory, Physics, VNM Theorem, History of Rationality, Productivity, Gaming (videogames/tabletop), Art
| Score | Title | Author | Age | Comments |
|---|---|---|---|---|
| 132 | Logical induction for software engineers | Alex Flint | 17d | 2 |
| 97 | An Intuitive Guide to Garrabrant Induction | Mark Xu | 1y | 18 |
| 80 | Toward a New Technical Explanation of Technical Explanation | abramdemski | 4y | 36 |
| 62 | Markets are Universal for Logical Induction | johnswentworth | 3y | 0 |
| 35 | Asymptotic Decision Theory (Improved Writeup) | Diffractor | 4y | 14 |
| 32 | Relating HCH and Logical Induction | abramdemski | 2y | 4 |
| 32 | Radical Probabilism [Transcript] | abramdemski | 2y | 12 |
| 30 | Corrigibility as outside view | TurnTrout | 2y | 11 |
| 28 | Beware of black boxes in AI alignment research | cousin_it | 4y | 10 |
| 18 | Two Major Obstacles for Logical Inductor Decision Theory | Scott Garrabrant | 5y | 0 |
| 4 | A measure-theoretic generalization of logical induction | Vanessa Kosoy | 5y | 0 |
| 3 | Logical Inductors that trust their limits | Scott Garrabrant | 6y | 0 |
| 3 | The set of Logical Inductors is not Convex | Scott Garrabrant | 6y | 0 |
| 3 | Logical Induction with incomputable sequences | AlexMennen | 5y | 0 |
| Score | Title | Author | Age | Comments |
|---|---|---|---|---|
| 146 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5 |
| 95 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4 |
| 82 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31 |
| 80 | In Logical Time, All Games are Iterated Games | abramdemski | 4y | 8 |
| 72 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2 |
| 69 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2 |
| 61 | Bayesian Probability is for things that are Space-like Separated from You | Scott Garrabrant | 4y | 22 |
| 60 | Alignment Newsletter #13: 07/02/18 | Rohin Shah | 4y | 12 |
| 40 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5 |
| 38 | Call for contributors to the Alignment Newsletter | Rohin Shah | 3y | 0 |
| 36 | How to get value learning and reference wrong | Charlie Steiner | 3y | 2 |
| 35 | [MLSN #6]: Transparency survey, provable robustness, ML models that predict the future | Dan H | 2mo | 0 |
| 34 | Alignment Newsletter #15: 07/16/18 | Rohin Shah | 4y | 0 |
| 31 | [AN #59] How arguments for AI risk have changed over time | Rohin Shah | 3y | 4 |