Newsletters (95 posts)
Karma | Title | Author | Age | Comments
92 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5
64 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5
55 | [AN #59] How arguments for AI risk have changed over time | Rohin Shah | 3y | 4
54 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2
51 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2
50 | Alignment Newsletter #15: 07/16/18 | Rohin Shah | 4y | 0
48 | [AN #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee | Rohin Shah | 3y | 1
40 | [AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment | Rohin Shah | 2y | 4
40 | Call for contributors to the Alignment Newsletter | Rohin Shah | 3y | 0
37 | Alignment Newsletter #48 | Rohin Shah | 3y | 14
37 | Alignment Newsletter #39 | Rohin Shah | 3y | 2
37 | Alignment Newsletter #17 | Rohin Shah | 4y | 0
35 | [AN #57] Why we should focus on robustness in AI safety, and the analogous problems in programming | Rohin Shah | 3y | 15
32 | [AN #87]: What might happen as deep learning scales even further? | Rohin Shah | 2y | 0
80 | Alignment Newsletter #13: 07/02/18 | Rohin Shah | 4y | 12