Tags: Logical Induction (14 posts), Gears-Level, Inside/Outside View, Problem of Old Evidence, Logical Uncertainty
Tags: Newsletters (107 posts), Postmortems & Retrospectives, Bayesian Decision Theory, Physics, VNM Theorem, History of Rationality, Productivity, Gaming (videogames/tabletop), Art
Score · Title · Author · Age · Comments
132 · Logical induction for software engineers · Alex Flint · 17d · 2
3 · Logical Inductors that trust their limits · Scott Garrabrant · 6y · 0
3 · The set of Logical Inductors is not Convex · Scott Garrabrant · 6y · 0
4 · A measure-theoretic generalization of logical induction · Vanessa Kosoy · 5y · 0
18 · Two Major Obstacles for Logical Inductor Decision Theory · Scott Garrabrant · 5y · 0
3 · Logical Induction with incomputable sequences · AlexMennen · 5y · 0
30 · Corrigibility as outside view · TurnTrout · 2y · 11
28 · Beware of black boxes in AI alignment research · cousin_it · 4y · 10
32 · Relating HCH and Logical Induction · abramdemski · 2y · 4
62 · Markets are Universal for Logical Induction · johnswentworth · 3y · 0
32 · Radical Probabilism [Transcript] · abramdemski · 2y · 12
80 · Toward a New Technical Explanation of Technical Explanation · abramdemski · 4y · 36
97 · An Intuitive Guide to Garrabrant Induction · Mark Xu · 1y · 18
35 · Asymptotic Decision Theory (Improved Writeup) · Diffractor · 4y · 14
28 · [AN #112]: Engineering a Safer World · Rohin Shah · 2y · 2
5 · The Alignment Newsletter #4: 04/30/18 · Rohin Shah · 4y · 0
19 · Alignment Newsletter #21 · Rohin Shah · 4y · 0
15 · Alignment Newsletter #42 · Rohin Shah · 3y · 1
9 · Alignment Newsletter #28 · Rohin Shah · 4y · 0
22 · [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment · Rohin Shah · 2y · 4
82 · Alignment Newsletter One Year Retrospective · Rohin Shah · 3y · 31
14 · [AN #56] Should ML researchers stop running experiments before making hypotheses? · Rohin Shah · 3y · 8
5 · The Alignment Newsletter #7: 05/21/18 · Rohin Shah · 4y · 0
14 · Alignment Newsletter #44 · Rohin Shah · 3y · 0
8 · Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach · Scott Garrabrant · 7y · 0
11 · Generalizing Foundations of Decision Theory · abramdemski · 5y · 0
24 · [AN #87]: What might happen as deep learning scales even further? · Rohin Shah · 2y · 0
23 · [AN #84] Reviewing AI alignment work in 2018-19 · Rohin Shah · 2y · 0