Logical Uncertainty (11 posts)
Related tags: Postmortems & Retrospectives, Bayesian Decision Theory, Physics, Productivity, History of Rationality, VNM Theorem, Art
Karma | Title | Author | Age | Comments
5 | Brute-forcing the universe: a non-standard shot at diamond alignment | Martín Soto | 28d | 0
104 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31
95 | Bayesian Probability is for things that are Space-like Separated from You | Scott Garrabrant | 4y | 22
90 | In Logical Time, All Games are Iterated Games | abramdemski | 4y | 8
83 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4
38 | How to get value learning and reference wrong | Charlie Steiner | 3y | 2
33 | Beliefs at different timescales | Nisan | 4y | 12
23 | Bounded Oracle Induction | Diffractor | 4y | 0
18 | Logical Uncertainty and Functional Decision Theory | swordsintoploughshares | 4y | 4
18 | Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach | Scott Garrabrant | 7y | 0
13 | Generalizing Foundations of Decision Theory | abramdemski | 5y | 0
Newsletters (96 posts)
Related tags: Gaming (videogames/tabletop)

Karma | Title | Author | Age | Comments
92 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5
54 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2
51 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2
17 | [MLSN #6]: Transparency survey, provable robustness, ML models that predict the future | Dan H | 2mo | 0
64 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5
27 | [AN #170]: Analyzing the argument for risk from power-seeking AI | Rohin Shah | 1y | 1
25 | [AN #167]: Concrete ML safety problems and their relevance to x-risk | Rohin Shah | 1y | 4
80 | Alignment Newsletter #13: 07/02/18 | Rohin Shah | 4y | 12
26 | [AN #145]: Our three year anniversary! | Rohin Shah | 1y | 0
48 | [AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee | Rohin Shah | 3y | 1
55 | [AN #59] How arguments for AI risk have changed over time | Rohin Shah | 3y | 4
40 | [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment | Rohin Shah | 2y | 4
40 | Call for contributors to the Alignment Newsletter | Rohin Shah | 3y | 0
32 | [AN #87]: What might happen as deep learning scales even further? | Rohin Shah | 2y | 0