Intellectual Progress (Society-Level) (10 posts)
Related tags: Practice & Philosophy of Science, Intellectual Progress via LessWrong, Economic Consequences of AGI, Superintelligence, Automation
| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 135 | Your posts should be on arXiv | JanBrauner | 3mo | 39 |
| 97 | On Solving Problems Before They Appear: The Weird Epistemologies of Alignment | adamShimi | 1y | 11 |
| 95 | Productive Mistakes, Not Perfect Answers | adamShimi | 8mo | 11 |
| 92 | Intuitions about solving hard problems | Richard_Ngo | 7mo | 23 |
| 81 | Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress. | Mark Xu | 1y | 9 |
| 56 | Suggestions of posts on the AF to review | adamShimi | 1y | 20 |
| 53 | Epistemological Framing for AI Alignment Research | adamShimi | 1y | 7 |
| 32 | Levels of Pluralism | adamShimi | 4mo | 0 |
| 30 | Epistemic Artefacts of (conceptual) AI alignment research | Nora_Ammann | 4mo | 1 |
| 29 | Characterizing Real-World Agents as a Research Meta-Strategy | johnswentworth | 3y | 4 |

Scholarship & Learning (21 posts)
Related tags: SERI MATS, AI Alignment Fieldbuilding, Distillation & Pedagogy, Information Hazards, PIBBSS, Privacy
| Karma | Title | Author | Age | Comments |
|---|---|---|---|---|
| 207 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16 |
| 192 | Call For Distillers | johnswentworth | 8mo | 42 |
| 161 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30 |
| 136 | The Fusion Power Generator Scenario | johnswentworth | 2y | 29 |
| 119 | Conjecture: Internal Infohazard Policy | Connor Leahy | 4mo | 6 |
| 82 | ML Alignment Theory Program under Evan Hubinger | Oliver Zhang | 1y | 3 |
| 71 | SERI MATS Program - Winter 2022 Cohort | Ryan Kidd | 2mo | 12 |
| 68 | Principles of Privacy for Alignment Research | johnswentworth | 4mo | 30 |
| 61 | Needed: AI infohazard policy | Vanessa Kosoy | 2y | 17 |
| 46 | [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] | David Scott Krueger (formerly: capybaralet) | 3mo | 1 |
| 35 | Economic AI Safety | jsteinhardt | 1y | 3 |
| 35 | Behaviour Manifolds and the Hessian of the Total Loss - Notes and Criticism | Spencer Becker-Kahn | 3mo | 4 |
| 31 | Reflections on the PIBBSS Fellowship 2022 | Nora_Ammann | 9d | 0 |
| 28 | Auditing games for high-level interpretability | Paul Colognese | 1mo | 1 |