Branch 1 (14 posts): Logical Induction, Gears-Level, Inside/Outside View, Problem of Old Evidence, Logical Uncertainty
Branch 2 (107 posts): Newsletters, Postmortems & Retrospectives, Bayesian Decision Theory, Physics, VNM Theorem, History of Rationality, Productivity, Gaming (videogames/tabletop), Art
Branch 1 posts:

Karma | Title | Author | Age | Comments
124 | Logical induction for software engineers | Alex Flint | 17d | 2
115 | An Intuitive Guide to Garrabrant Induction | Mark Xu | 1y | 18
84 | Toward a New Technical Explanation of Technical Explanation | abramdemski | 4y | 36
73 | Markets are Universal for Logical Induction | johnswentworth | 3y | 0
47 | Relating HCH and Logical Induction | abramdemski | 2y | 4
46 | Radical Probabilism [Transcript] | abramdemski | 2y | 12
39 | Beware of black boxes in AI alignment research | cousin_it | 4y | 10
38 | Asymptotic Decision Theory (Improved Writeup) | Diffractor | 4y | 14
36 | Corrigibility as outside view | TurnTrout | 2y | 11
19 | Two Major Obstacles for Logical Inductor Decision Theory | Scott Garrabrant | 5y | 0
4 | Logical Inductors that trust their limits | Scott Garrabrant | 6y | 0
4 | The set of Logical Inductors is not Convex | Scott Garrabrant | 6y | 0
4 | A measure-theoretic generalization of logical induction | Vanessa Kosoy | 5y | 0
4 | Logical Induction with incomputable sequences | AlexMennen | 5y | 0
Branch 2 posts (first 14 of 107):

Karma | Title | Author | Age | Comments
119 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5
93 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31
89 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4
85 | In Logical Time, All Games are Iterated Games | abramdemski | 4y | 8
78 | Bayesian Probability is for things that are Space-like Separated from You | Scott Garrabrant | 4y | 22
70 | Alignment Newsletter #13: 07/02/18 | Rohin Shah | 4y | 12
63 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2
60 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2
52 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5
43 | [AN #59] How arguments for AI risk have changed over time | Rohin Shah | 3y | 4
42 | Alignment Newsletter #15: 07/16/18 | Rohin Shah | 4y | 0
39 | Call for contributors to the Alignment Newsletter | Rohin Shah | 3y | 0
38 | [AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee | Rohin Shah | 3y | 1
37 | How to get value learning and reference wrong | Charlie Steiner | 3y | 2