Tags:
- Logical Uncertainty (11 posts)
- Postmortems & Retrospectives
- Bayesian Decision Theory
- Physics
- Productivity
- History of Rationality
- VNM Theorem
- Art
- Newsletters (96 posts)
- Gaming (videogames/tabletop)
| Karma | Title | Author | Posted | Comments |
|---|---|---|---|---|
| 93 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31 |
| 13 | Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach | Scott Garrabrant | 7y | 0 |
| 12 | Generalizing Foundations of Decision Theory | abramdemski | 5y | 0 |
| 78 | Bayesian Probability is for things that are Space-like Separated from You | Scott Garrabrant | 4y | 22 |
| 25 | Bounded Oracle Induction | Diffractor | 4y | 0 |
| 37 | How to get value learning and reference wrong | Charlie Steiner | 3y | 2 |
| 85 | In Logical Time, All Games are Iterated Games | abramdemski | 4y | 8 |
| 89 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4 |
| 15 | Logical Uncertainty and Functional Decision Theory | swordsintoploughshares | 4y | 4 |
| 6 | Brute-forcing the universe: a non-standard shot at diamond alignment | Martín Soto | 28d | 0 |
| 25 | Beliefs at different timescales | Nisan | 4y | 12 |
| 25 | [AN #112]: Engineering a Safer World | Rohin Shah | 2y | 2 |
| 8 | The Alignment Newsletter #4: 04/30/18 | Rohin Shah | 4y | 0 |
| 25 | Alignment Newsletter #21 | Rohin Shah | 4y | 0 |
| 20 | Alignment Newsletter #42 | Rohin Shah | 3y | 1 |
| 11 | Alignment Newsletter #28 | Rohin Shah | 4y | 0 |
| 31 | [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment | Rohin Shah | 2y | 4 |
| 21 | [AN #56] Should ML researchers stop running experiments before making hypotheses? | Rohin Shah | 3y | 8 |
| 8 | The Alignment Newsletter #7: 05/21/18 | Rohin Shah | 4y | 0 |
| 18 | Alignment Newsletter #44 | Rohin Shah | 3y | 0 |
| 28 | [AN #87]: What might happen as deep learning scales even further? | Rohin Shah | 2y | 0 |
| 23 | [AN #84] Reviewing AI alignment work in 2018-19 | Rohin Shah | 2y | 0 |
| 19 | [AN #82]: How OpenAI Five distributed their training computation | Rohin Shah | 2y | 0 |
| 16 | [AN #89]: A unifying formalism for preference learning algorithms | Rohin Shah | 2y | 0 |
| 119 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5 |