126 posts · Tags: Newsletters, Logical Induction, Logical Uncertainty, Bayes' Theorem, Gears-Level, Radical Probabilism, Postmortems & Retrospectives, Inside/Outside View, Probability & Statistics, Betting, Problem of Old Evidence, Bayesian Decision Theory
563 posts · Tags: SERI MATS, Intellectual Progress (Society-Level), Practice & Philosophy of Science, Epistemology, AI Alignment Fieldbuilding, Distillation & Pedagogy, Information Hazards, PIBBSS, Intellectual Progress (Individual-Level), Intellectual Progress via LessWrong, Research Taste, Economic Consequences of AGI
Karma | Title | Author | Posted | Comments
124 | Logical induction for software engineers | Alex Flint | 17d | 2
119 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 3mo | 5
63 | QAPR 4: Inductive biases | Quintin Pope | 2mo | 2
60 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 3mo | 2
26 | [MLSN #6]: Transparency survey, provable robustness, ML models that predict the future | Dan H | 2mo | 0
6 | Brute-forcing the universe: a non-standard shot at diamond alignment | Martín Soto | 28d | 0
115 | An Intuitive Guide to Garrabrant Induction | Mark Xu | 1y | 18
159 | Radical Probabilism | abramdemski | 2y | 47
52 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 1y | 5
93 | Alignment Newsletter One Year Retrospective | Rohin Shah | 3y | 31
21 | [AN #170]: Analyzing the argument for risk from power-seeking AI | Rohin Shah | 1y | 1
73 | Markets are Universal for Logical Induction | johnswentworth | 3y | 0
89 | History of the Development of Logical Induction | Scott Garrabrant | 4y | 4
47 | Relating HCH and Logical Induction | abramdemski | 2y | 4
31 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0
207 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16
161 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30
135 | Your posts should be on arXiv | JanBrauner | 3mo | 39
71 | SERI MATS Program - Winter 2022 Cohort | Ryan Kidd | 2mo | 12
119 | Conjecture: Internal Infohazard Policy | Connor Leahy | 4mo | 6
84 | How to do theoretical research, a personal perspective | Mark Xu | 4mo | 4
192 | Call For Distillers | johnswentworth | 8mo | 42
28 | Auditing games for high-level interpretability | Paul Colognese | 1mo | 1
54 | Methodological Therapy: An Agenda For Tackling Research Bottlenecks | adamShimi | 2mo | 6
46 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1
68 | Principles of Privacy for Alignment Research | johnswentworth | 4mo | 30
92 | Intuitions about solving hard problems | Richard_Ngo | 7mo | 23
95 | Productive Mistakes, Not Perfect Answers | adamShimi | 8mo | 11