31 posts: SERI MATS, AI Alignment Fieldbuilding, Intellectual Progress (Society-Level), Distillation & Pedagogy, Practice & Philosophy of Science, Information Hazards, PIBBSS, Intellectual Progress via LessWrong, Economic Consequences of AGI, Privacy, Superintelligence, Automation
532 posts: Epistemology, Intellectual Progress (Individual-Level), Research Taste, Epistemic Review, Selection Effects, Social & Cultural Dynamics, Humility
Karma | Title | Author | Age | Comments
164 | Most People Start With The Same Few Bad Ideas | johnswentworth | 3mo | 30
163 | Call For Distillers | johnswentworth | 8mo | 42
149 | Lessons learned from talking to >100 academics about AI safety | Marius Hobbhahn | 2mo | 16
139 | The Fusion Power Generator Scenario | johnswentworth | 2y | 29
126 | Your posts should be on arXiv | JanBrauner | 3mo | 39
95 | On Solving Problems Before They Appear: The Weird Epistemologies of Alignment | adamShimi | 1y | 11
90 | Productive Mistakes, Not Perfect Answers | adamShimi | 8mo | 11
87 | Intuitions about solving hard problems | Richard_Ngo | 7mo | 23
82 | Principles of Privacy for Alignment Research | johnswentworth | 4mo | 30
78 | ML Alignment Theory Program under Evan Hubinger | Oliver Zhang | 1y | 3
72 | Suggestions of posts on the AF to review | adamShimi | 1y | 20
72 | Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress. | Mark Xu | 1y | 9
71 | Conjecture: Internal Infohazard Policy | Connor Leahy | 4mo | 6
70 | Needed: AI infohazard policy | Vanessa Kosoy | 2y | 17
Karma | Title | Author | Age | Comments
169 | Alignment Research Field Guide | abramdemski | 3y | 9
95 | How to do theoretical research, a personal perspective | Mark Xu | 4mo | 4
63 | How I Formed My Own Views About AI Safety | Neel Nanda | 9mo | 6
53 | Methodological Therapy: An Agenda For Tackling Research Bottlenecks | adamShimi | 2mo | 6
45 | David Wolpert on Knowledge | Alex Flint | 1y | 3
41 | Epistemic Strategies of Selection Theorems | adamShimi | 1y | 1
40 | Forum Digest: Corrigibility, utility indifference, & related control ideas | Benya_Fallenstein | 7y | 0
38 | AI Alignment Open Thread August 2019 | habryka | 3y | 96
37 | Uncertainty versus fuzziness versus extrapolation desiderata | Stuart_Armstrong | 3y | 8
34 | Single player extensive-form games as a model of UDT | cousin_it | 8y | 26
33 | To first order, moral realism and moral anti-realism are the same thing | Stuart_Armstrong | 3y | 8
31 | AI Alignment Open Thread October 2019 | habryka | 3y | 58
30 | Being wrong in ethics | Stuart_Armstrong | 3y | 0
30 | Hierarchical system preferences and subagent preferences | Stuart_Armstrong | 3y | 2