Go Back
Choose this branch
Choose this branch
meritocratic
regular
democratic
hot
top
alive
14 posts
Logical Induction
Gears-Level
Inside/Outside View
Problem of Old Evidence
Logical Uncertainty
107 posts
Newsletters
Postmortems & Retrospectives
Bayesian Decision Theory
Physics
VNM Theorem
History of Rationality
Productivity
Gaming (videogames/tabletop)
Art
124
Logical induction for software engineers
Alex Flint
17d
2
4
Logical Inductors that trust their limits
Scott Garrabrant
6y
0
4
The set of Logical Inductors is not Convex
Scott Garrabrant
6y
0
4
A measure-theoretic generalization of logical induction
Vanessa Kosoy
5y
0
19
Two Major Obstacles for Logical Inductor Decision Theory
Scott Garrabrant
5y
0
4
Logical Induction with incomputable sequences
AlexMennen
5y
0
36
Corrigibility as outside view
TurnTrout
2y
11
39
Beware of black boxes in AI alignment research
cousin_it
4y
10
47
Relating HCH and Logical Induction
abramdemski
2y
4
73
Markets are Universal for Logical Induction
johnswentworth
3y
0
46
Radical Probabilism [Transcript]
abramdemski
2y
12
84
Toward a New Technical Explanation of Technical Explanation
abramdemski
4y
36
115
An Intuitive Guide to Garrabrant Induction
Mark Xu
1y
18
38
Asymptotic Decision Theory (Improved Writeup)
Diffractor
4y
14
25
[AN #112]: Engineering a Safer World
Rohin Shah
2y
2
8
The Alignment Newsletter #4: 04/30/18
Rohin Shah
4y
0
25
Alignment Newsletter #21
Rohin Shah
4y
0
20
Alignment Newsletter #42
Rohin Shah
3y
1
11
Alignment Newsletter #28
Rohin Shah
4y
0
31
[AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment
Rohin Shah
2y
4
93
Alignment Newsletter One Year Retrospective
Rohin Shah
3y
31
21
[AN #56] Should ML researchers stop running experiments before making hypotheses?
Rohin Shah
3y
8
8
The Alignment Newsletter #7: 05/21/18
Rohin Shah
4y
0
18
Alignment Newsletter #44
Rohin Shah
3y
0
13
Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach
Scott Garrabrant
7y
0
12
Generalizing Foundations of Decision Theory
abramdemski
5y
0
28
[AN #87]: What might happen as deep learning scales even further?
Rohin Shah
2y
0
23
[AN #84] Reviewing AI alignment work in 2018-19
Rohin Shah
2y
0