Branch 1 — 126 posts
Tags: Newsletters, Logical Induction, Logical Uncertainty, Bayes' Theorem, Gears-Level, Radical Probabilism, Postmortems & Retrospectives, Inside/Outside View, Probability & Statistics, Betting, Problem of Old Evidence, Bayesian Decision Theory
Branch 2 — 563 posts
Tags: SERI MATS, Intellectual Progress (Society-Level), Practice & Philosophy of Science, Epistemology, AI Alignment Fieldbuilding, Distillation & Pedagogy, Information Hazards, PIBBSS, Intellectual Progress (Individual-Level), Intellectual Progress via LessWrong, Research Taste, Economic Consequences of AGI
Posts:

Karma | Title | Author | Age | Comments
116 | Logical induction for software engineers | Alex Flint | 17d | 2
92 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5
54 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2
51 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2
17 | [MLSN #6]: Transparency survey, provable robustness, ML models that predict the future | Dan H | 2mo | 0
5 | Brute-forcing the universe: a non-standard shot at diamond alignment | Martín Soto | 28d | 0
133 | An Intuitive Guide to Garrabrant Induction | Mark Xu | 1y | 18
164 | Radical Probabilism | abramdemski | 2y | 47
64 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5
27 | [AN #170]: Analyzing the argument for risk from power-seeking AI | Rohin Shah | 1y | 1
104 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31
62 | Relating HCH and Logical Induction | abramdemski | 2y | 4
60 | Radical Probabilism [Transcript] | abramdemski | 2y | 12
84 | Markets are Universal for Logical Induction | johnswentworth | 3y | 0
20 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0
149 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16
164 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30
126 | Your posts should be on arXiv | JanBrauner | 3mo | 39
66 | SERI MATS Program - Winter 2022 Cohort | Ryan Kidd | 2mo | 12
95 | How to do theoretical research, a personal perspective | Mark Xu | 4mo | 4
53 | Methodological Therapy: An Agenda For Tackling Research Bottlenecks | adamShimi | 2mo | 6
163 | Call For Distillers | johnswentworth | 8mo | 42
82 | Principles of Privacy for Alignment Research | johnswentworth | 4mo | 30
71 | Conjecture: Internal Infohazard Policy | Connor Leahy | 4mo | 6
39 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1
16 | Auditing games for high-level interpretability | Paul Colognese | 1mo | 1
87 | Intuitions about solving hard problems | Richard_Ngo | 7mo | 23
90 | Productive Mistakes, Not Perfect Answers | adamShimi | 8mo | 11