Logical Uncertainty (11 posts)
Other tags: Postmortems & Retrospectives, Bayesian Decision Theory, Physics, Productivity, History of Rationality, VNM Theorem, Art
| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 6 | Brute-forcing the universe: a non-standard shot at diamond alignment | Martín Soto | 28d | 0 |
| 93 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31 |
| 89 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4 |
| 85 | In Logical Time, All Games are Iterated Games | abramdemski | 4y | 8 |
| 78 | Bayesian Probability is for things that are Space-like Separated from You | Scott Garrabrant | 4y | 22 |
| 37 | How to get value learning and reference wrong | Charlie Steiner | 3y | 2 |
| 25 | Bounded Oracle Induction | Diffractor | 4y | 0 |
| 25 | Beliefs at different timescales | Nisan | 4y | 12 |
| 15 | Logical Uncertainty and Functional Decision Theory | swordsintoploughshares | 4y | 4 |
| 12 | Generalizing Foundations of Decision Theory | abramdemski | 5y | 0 |
| 13 | Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach | Scott Garrabrant | 7y | 0 |
Newsletters (96 posts)
Other tags: Gaming (videogames/tabletop)

| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 119 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5 |
| 63 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2 |
| 60 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2 |
| 26 | [MLSN #6]: Transparency survey, provable robustness, ML models that predict the future | Dan H | 2mo | 0 |
| 52 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5 |
| 21 | [AN #170]: Analyzing the argument for risk from power-seeking AI | Rohin Shah | 1y | 1 |
| 19 | [AN #167]: Concrete ML safety problems and their relevance to x-risk | Rohin Shah | 1y | 4 |
| 70 | Alignment Newsletter #13: 07/02/18 | Rohin Shah | 4y | 12 |
| 38 | [AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee | Rohin Shah | 3y | 1 |
| 43 | [AN #59] How arguments for AI risk have changed over time | Rohin Shah | 3y | 4 |
| 19 | [AN #145]: Our three year anniversary! | Rohin Shah | 1y | 0 |
| 39 | Call for contributors to the Alignment Newsletter | Rohin Shah | 3y | 0 |
| 25 | [AN #112]: Engineering a Safer World | Rohin Shah | 2y | 2 |
| 31 | [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment | Rohin Shah | 2y | 4 |