Tags: Logical Induction, Gears-Level, Inside/Outside View, Problem of Old Evidence, Logical Uncertainty (14 posts)
Tags: Newsletters, Postmortems & Retrospectives, Bayesian Decision Theory, Physics, VNM Theorem, History of Rationality, Productivity, Gaming (videogames/tabletop), Art (107 posts)
Karma | Title | Author | Posted | Comments
133 | An Intuitive Guide to Garrabrant Induction | Mark Xu | 1y | 18
116 | Logical induction for software engineers | Alex Flint | 17d | 2
88 | Toward a New Technical Explanation of Technical Explanation | abramdemski | 4y | 36
84 | Markets are Universal for Logical Induction | johnswentworth | 3y | 0
62 | Relating HCH and Logical Induction | abramdemski | 2y | 4
60 | Radical Probabilism [Transcript] | abramdemski | 2y | 12
50 | Beware of black boxes in AI alignment research | cousin_it | 4y | 10
42 | Corrigibility as outside view | TurnTrout | 2y | 11
41 | Asymptotic Decision Theory (Improved Writeup) | Diffractor | 4y | 14
20 | Two Major Obstacles for Logical Inductor Decision Theory | Scott Garrabrant | 5y | 0
5 | Logical Inductors that trust their limits | Scott Garrabrant | 6y | 0
5 | The set of Logical Inductors is not Convex | Scott Garrabrant | 6y | 0
5 | Logical Induction with incomputable sequences | AlexMennen | 5y | 0
4 | A measure-theoretic generalization of logical induction | Vanessa Kosoy | 5y | 0
Karma | Title | Author | Posted | Comments
104 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31
95 | Bayesian Probability is for things that are Space-like Separated from You | Scott Garrabrant | 4y | 22
92 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5
90 | In Logical Time, All Games are Iterated Games | abramdemski | 4y | 8
83 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4
80 | Alignment Newsletter #13: 07/02/18 | Rohin Shah | 4y | 12
64 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5
55 | [AN #59] How arguments for AI risk have changed over time | Rohin Shah | 3y | 4
54 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2
51 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2
50 | Alignment Newsletter #15: 07/16/18 | Rohin Shah | 4y | 0
48 | [AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee | Rohin Shah | 3y | 1
40 | [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment | Rohin Shah | 2y | 4
40 | Call for contributors to the Alignment Newsletter | Rohin Shah | 3y | 0